diff --git "a/desc_questions_dataset.csv" "b/desc_questions_dataset.csv" --- "a/desc_questions_dataset.csv" +++ "b/desc_questions_dataset.csv" @@ -1,460 +1,460 @@ Chart;description;Questions -ObesityDataSet_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition FAF <= 2.0 and the second with the condition Height <= 1.72.;['It is clear that variable Age is one of the three most relevant features.', 'The variable TUE seems to be one of the two most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CH2O is the first most discriminative variable regarding the class.', 'Variable TUE is one of the most relevant variables.', 'Variable Weight seems to be relevant for the majority of mining tasks.', 'Variables TUE and Age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 90%.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The variable Age seems to be one of the five most relevant features.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that Naive Bayes algorithm classifies (not A, B), as Obesity_Type_II.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that KNN algorithm classifies (A,B) as Obesity_Type_II for any k ≤ 370.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that KNN algorithm classifies (not A, B) as Obesity_Type_I for any k ≤ 840.'] -ObesityDataSet_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -ObesityDataSet_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -ObesityDataSet_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -ObesityDataSet_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to 
identify the existence of overfitting for KNN models with less than 4 neighbors.'] -ObesityDataSet_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -ObesityDataSet_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 30%.'] -ObesityDataSet_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables CH2O or NCP can be discarded without losing information.', 'The variable FAF can be discarded without risking losing information.', 'Variables TUE and FAF are redundant, but we can’t say the same for the pair Height and FCVC.', 'Variables Weight and FAF are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Weight seems to be relevant for the majority of mining tasks.', 'Variables Age and Height seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable FAF might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable NCP previously than variable Weight.'] -ObesityDataSet_boxplots.png;A set of boxplots of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['Variable FCVC is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable TUE shows some outliers, but we can’t be sure of the same for variable NCP.', 'Outliers seem to be a problem in the dataset.', 'Variable FAF shows a high number of outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable TUE presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -ObesityDataSet_histograms_symbolic.png;A set of bar charts of the variables ['CAEC', 'CALC', 'MTRANS', 'Gender', 'family_history_with_overweight', 'FAVC', 'SMOKE', 'SCC'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable FAVC can 
be seen as ordinal.', 'The variable FAVC can be seen as ordinal without losing information.', 'Considering the common semantics for FAVC and CAEC variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for family_history_with_overweight variable, dummification would be the most adequate encoding.', 'The variable CALC can be coded as ordinal without losing information.', 'Feature generation based on variable MTRANS seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of CAEC seems to be promising.', 'Given the usual semantics of SMOKE variable, dummification would have been a better codification.', 'It is better to drop the variable CAEC than removing all records with missing values.', 'Not knowing the semantics of CALC variable, dummification could have been a more adequate codification.'] +ObesityDataSet_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition FAF <= 2.0 and the second with the condition Height <= 1.72.;['The variable FAF discriminates between the target values, as shown in the decision tree.', 'Variable Height is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The variable FAF seems to be one of the two most relevant features.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that Naive Bayes algorithm classifies (not A, B), as Overweight_Level_I.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], the Decision Tree presented classifies (A, not B) as Obesity_Type_III.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that KNN algorithm classifies (A, not B) as Insufficient_Weight for any k ≤ 160.'] +ObesityDataSet_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +ObesityDataSet_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +ObesityDataSet_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +ObesityDataSet_overfitting_knn.png;A multi-line chart showing 
the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +ObesityDataSet_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] +ObesityDataSet_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 7 principal components would imply an error between 15 and 20%.'] +ObesityDataSet_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Age or Height can be discarded without losing information.', 'The variable Weight can be discarded without risking losing information.', 'Variables NCP and TUE are redundant, but we can’t say the same for the pair Weight and Height.', 'Variables FAF and TUE are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Height seems to be relevant for the majority of mining tasks.', 'Variables FAF and Height seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable CH2O might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Age previously than variable Height.'] +ObesityDataSet_boxplots.png;A set of boxplots of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['Variable CH2O is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable FCVC shows some outliers, but we can’t be sure of the same for variable TUE.', 'Outliers seem to be a problem in the dataset.', 'Variable FAF shows some outlier values.', 'Variable NCP doesn’t have any outliers.', 'Variable Height presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the 
results with distance-based methods.'] +ObesityDataSet_histograms_symbolic.png;A set of bar charts of the variables ['CAEC', 'CALC', 'MTRANS', 'Gender', 'family_history_with_overweight', 'FAVC', 'SMOKE', 'SCC'].;['All variables, but the class, should be dealt with as numeric.', 'The variable SMOKE can be seen as ordinal.', 'The variable FAVC can be seen as ordinal without losing information.', 'Considering the common semantics for FAVC and CAEC variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for family_history_with_overweight variable, dummification would be the most adequate encoding.', 'The variable MTRANS can be coded as ordinal without losing information.', 'Feature generation based on variable family_history_with_overweight seems to be promising.', 'Feature generation based on the use of variable SCC wouldn’t be useful, but the use of CAEC seems to be promising.', 'Given the usual semantics of family_history_with_overweight variable, dummification would have been a better codification.', 'It is better to drop the variable CALC than removing all records with missing values.', 'Not knowing the semantics of family_history_with_overweight variable, dummification could have been a more adequate codification.'] ObesityDataSet_class_histogram.png;A bar chart showing the distribution of the target variable NObeyesdad.;['Balancing this dataset would be mandatory to improve the results.'] -ObesityDataSet_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -ObesityDataSet_histograms_numeric.png;A set of histograms of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Age can be seen as ordinal.', 'The variable Weight can be seen as ordinal without losing information.', 'Variable NCP is balanced.', 'It is clear that variable FAF shows some outliers, but we can’t be sure of the same for variable FCVC.', 'Outliers seem to be a problem in the dataset.', 'Variable TUE shows a high number of outlier values.', 'Variable FAF doesn’t have any outliers.', 'Variable Height presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for TUE and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Height variable, dummification would be the most adequate encoding.', 'The variable NCP can be coded as ordinal without losing information.', 'Feature generation based on variable Height seems to be promising.', 'Feature generation based on the use of variable FAF wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of FAF variable, dummification would have been a better codification.', 'It is better to drop the variable TUE than removing all records with missing values.', 'Not knowing the semantics of Weight variable, dummification could have 
been a more adequate codification.'] -customer_segmentation_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Family_Size <= 2.5 and the second with the condition Work_Experience <= 9.5.;['It is clear that variable Work_Experience is one of the four most relevant features.', 'The variable Work_Experience seems to be one of the three most relevant features.', 'The variable Work_Experience discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Work_Experience is the second most discriminative variable regarding the class.', 'Variable Work_Experience is one of the most relevant variables.', 'Variable Work_Experience seems to be relevant for the majority of mining tasks.', 'Variables Work_Experience and Family_Size seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives reported in the same tree is 10.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A,B) as D for any k ≤ 11.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], the Decision Tree presented classifies (not A, not B) as C.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A,B) as A for any k ≤ 249.'] -customer_segmentation_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -customer_segmentation_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -customer_segmentation_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -customer_segmentation_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 
neighbors.'] -customer_segmentation_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -customer_segmentation_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] -customer_segmentation_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Work_Experience', 'Family_Size'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Age or Family_Size can be discarded without losing information.', 'The variable Family_Size can be discarded without risking losing information.', 'Variables Age and Family_Size seem to be useful for classification tasks.', 'Variables Work_Experience and Family_Size are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Family_Size seems to be relevant for the majority of mining tasks.', 'Variables Family_Size and Age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Family_Size might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Family_Size previously than variable Age.'] -customer_segmentation_boxplots.png;A set of boxplots of the variables ['Age', 'Work_Experience', 'Family_Size'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Work_Experience.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows a high number of outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Work_Experience presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -customer_segmentation_histograms_symbolic.png;A set of bar charts of the variables ['Profession', 'Spending_Score', 'Var_1', 'Gender', 'Ever_Married', 'Graduated'].;['All variables, but the class, should be dealt with as date.', 'The variable Spending_Score can be 
seen as ordinal.', 'The variable Profession can be seen as ordinal without losing information.', 'Considering the common semantics for Profession and Spending_Score variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Var_1 variable, dummification would be the most adequate encoding.', 'The variable Profession can be coded as ordinal without losing information.', 'Feature generation based on variable Var_1 seems to be promising.', 'Feature generation based on the use of variable Profession wouldn’t be useful, but the use of Spending_Score seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Graduated than removing all records with missing values.', 'Not knowing the semantics of Graduated variable, dummification could have been a more adequate codification.'] -customer_segmentation_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Ever_Married', 'Graduated', 'Profession', 'Work_Experience', 'Family_Size', 'Var_1'].;['Discarding variable Ever_Married would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Graduated seems to be promising.', 'It is better to drop the variable Var_1 than removing all records with missing values.'] +ObesityDataSet_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +ObesityDataSet_histograms_numeric.png;A set of histograms of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Height can be seen as ordinal.', 'The variable NCP can be seen as ordinal without losing information.', 'Variable FAF is balanced.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable CH2O.', 'Outliers seem to be a problem in the dataset.', 'Variable Height shows a high number of outlier values.', 'Variable TUE doesn’t have any outliers.', 'Variable FCVC presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Weight and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Age variable, dummification would be the most adequate encoding.', 'The variable Weight can be coded as ordinal without losing information.', 'Feature generation based on variable TUE seems to be promising.', 'Feature generation 
based on the use of variable Weight wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of FAF variable, dummification would have been a better codification.', 'It is better to drop the variable Age than removing all records with missing values.', 'Not knowing the semantics of CH2O variable, dummification could have been a more adequate codification.'] +customer_segmentation_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Family_Size <= 2.5 and the second with the condition Work_Experience <= 9.5.;['The variable Family_Size discriminates between the target values, as shown in the decision tree.', 'Variable Work_Experience is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A,B) as B for any k ≤ 11.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A, not B) as C for any k ≤ 723.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (not A, B) as B for any k ≤ 524.'] +customer_segmentation_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +customer_segmentation_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +customer_segmentation_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +customer_segmentation_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] 
+customer_segmentation_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +customer_segmentation_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.'] +customer_segmentation_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Work_Experience', 'Family_Size'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Age or Family_Size can be discarded without losing information.', 'The variable Age can be discarded without risking losing information.', 'Variables Age and Work_Experience seem to be useful for classification tasks.', 'Variables Age and Work_Experience are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Family_Size seems to be relevant for the majority of mining tasks.', 'Variables Family_Size and Work_Experience seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Work_Experience might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Age previously than variable Family_Size.'] +customer_segmentation_boxplots.png;A set of boxplots of the variables ['Age', 'Work_Experience', 'Family_Size'].;['Variable Family_Size is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Work_Experience shows some outliers, but we can’t be sure of the same for variable Family_Size.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows a high number of outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +customer_segmentation_histograms_symbolic.png;A set of bar charts of the variables ['Profession', 'Spending_Score', 'Var_1', 'Gender', 'Ever_Married', 'Graduated'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 
'The variable Ever_Married can be seen as ordinal without losing information.', 'Considering the common semantics for Var_1 and Profession variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Profession variable, dummification would be the most adequate encoding.', 'The variable Graduated can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Graduated wouldn’t be useful, but the use of Profession seems to be promising.', 'Given the usual semantics of Profession variable, dummification would have been a better codification.', 'It is better to drop the variable Graduated than removing all records with missing values.', 'Not knowing the semantics of Spending_Score variable, dummification could have been a more adequate codification.'] +customer_segmentation_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Ever_Married', 'Graduated', 'Profession', 'Work_Experience', 'Family_Size', 'Var_1'].;['Discarding variable Var_1 would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Var_1 seems to be promising.', 'It is better to drop the variable Family_Size than removing all records with missing values.'] customer_segmentation_class_histogram.png;A bar chart showing the distribution of the target variable Segmentation.;['Balancing this dataset would be mandatory to improve the results.'] -customer_segmentation_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -customer_segmentation_histograms_numeric.png;A set of histograms of the variables ['Age', 'Work_Experience', 'Family_Size'].;['All variables, but the class, should be dealt with as date.', 'The variable Family_Size can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Work_Experience is balanced.', 'It is clear that variable Family_Size shows some outliers, but we can’t be sure of the same for variable Work_Experience.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows some outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Family_Size and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Work_Experience variable, 
dummification would be the most adequate encoding.', 'The variable Work_Experience can be coded as ordinal without losing information.', 'Feature generation based on variable Family_Size seems to be promising.', 'Feature generation based on the use of variable Family_Size wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of Family_Size variable, dummification would have been a better codification.', 'It is better to drop the variable Work_Experience than removing all records with missing values.', 'Not knowing the semantics of Work_Experience variable, dummification could have been a more adequate codification.'] -urinalysis_tests_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 0.1 and the second with the condition pH <= 5.5.;['It is clear that variable pH is one of the three most relevant features.', 'The variable Age seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that pH is the first most discriminative variable regarding the class.', 'Variable Specific Gravity is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of False Positives reported in the same tree is 10.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that KNN algorithm classifies (A,B) as POSITIVE for any k ≤ 215.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as POSITIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as NEGATIVE.'] -urinalysis_tests_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -urinalysis_tests_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -urinalysis_tests_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random 
forest models with more than 1002 estimators.'] -urinalysis_tests_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] -urinalysis_tests_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] +customer_segmentation_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +customer_segmentation_histograms_numeric.png;A set of histograms of the variables ['Age', 'Work_Experience', 'Family_Size'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Family_Size can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Family_Size is balanced.', 'It is clear that variable Work_Experience shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows some outlier values.', 'Variable Family_Size doesn’t have any outliers.', 'Variable Work_Experience presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Family_Size and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Age variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable Work_Experience seems to be promising.', 'Feature generation based on the use of variable Work_Experience wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable Age than removing all records with missing values.', 'Not knowing the semantics of Family_Size variable, dummification could have been a more adequate codification.'] +urinalysis_tests_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 0.1 and the second with the condition pH <= 5.5.;['The variable Age discriminates between the target values, as shown in the 
decision tree.', 'Variable Age is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 60%.', 'The number of True Positives reported in the same tree is 10.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The recall for the presented tree is lower than its specificity.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as NEGATIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as POSITIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that KNN algorithm classifies (not A, B) as NEGATIVE for any k ≤ 763.'] +urinalysis_tests_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +urinalysis_tests_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +urinalysis_tests_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +urinalysis_tests_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +urinalysis_tests_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] urinalysis_tests_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference 
between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -urinalysis_tests_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -urinalysis_tests_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'pH', 'Specific Gravity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables pH or Specific Gravity can be discarded without losing information.', 'The variable Specific Gravity can be discarded without risking losing information.', 'Variables pH and Age seem to be useful for classification tasks.', 'Variables pH and Specific Gravity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Specific Gravity seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable pH might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Specific Gravity previously than variable Age.'] -urinalysis_tests_boxplots.png;A set of boxplots of the variables ['Age', 'pH', 'Specific Gravity'].;['Variable pH is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Specific Gravity.', 'Outliers seem to be a problem in the dataset.', 'Variable pH shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Specific Gravity presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -urinalysis_tests_histograms_symbolic.png;A set of bar charts of the variables ['Color', 'Transparency', 'Glucose', 'Protein', 'Epithelial Cells', 'Mucous Threads', 'Amorphous Urates', 'Bacteria', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Transparency can be seen as ordinal.', 'The variable Mucous Threads can be seen as ordinal without losing information.', 'Considering the common semantics for Amorphous Urates and Color variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Glucose variable, dummification would be the most adequate encoding.', 'The variable Transparency can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based 
on the use of variable Protein wouldn’t be useful, but the use of Color seems to be promising.', 'Given the usual semantics of Bacteria variable, dummification would have been a better codification.', 'It is better to drop the variable Mucous Threads than removing all records with missing values.', 'Not knowing the semantics of Transparency variable, dummification could have been a more adequate codification.'] -urinalysis_tests_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Color'].;['Discarding variable Color would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 40% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Color seems to be promising.', 'It is better to drop the variable Color than removing all records with missing values.'] +urinalysis_tests_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 20%.'] +urinalysis_tests_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'pH', 'Specific Gravity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables pH or Age can be discarded without losing information.', 'The variable Age can be discarded without risking losing information.', 'Variables Specific Gravity and Age seem to be useful for classification tasks.', 'Variables Age and pH are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Specific Gravity seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable pH might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Age previously than variable pH.'] +urinalysis_tests_boxplots.png;A set of boxplots of the variables ['Age', 'pH', 'Specific Gravity'].;['Variable pH is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Specific Gravity shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Specific Gravity shows a high number of outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other 
scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +urinalysis_tests_histograms_symbolic.png;A set of bar charts of the variables ['Color', 'Transparency', 'Glucose', 'Protein', 'Epithelial Cells', 'Mucous Threads', 'Amorphous Urates', 'Bacteria', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Gender can be seen as ordinal.', 'The variable Mucous Threads can be seen as ordinal without losing information.', 'Considering the common semantics for Epithelial Cells and Color variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Amorphous Urates variable, dummification would be the most adequate encoding.', 'The variable Color can be coded as ordinal without losing information.', 'Feature generation based on variable Amorphous Urates seems to be promising.', 'Feature generation based on the use of variable Protein wouldn’t be useful, but the use of Color seems to be promising.', 'Given the usual semantics of Bacteria variable, dummification would have been a better codification.', 'It is better to drop the variable Bacteria than removing all records with missing values.', 'Not knowing the semantics of Epithelial Cells variable, dummification could have been a more adequate codification.'] +urinalysis_tests_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Color'].;['Discarding variable Color would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Color seems to be promising.', 'It is better to drop the variable Color than removing all records with missing values.'] urinalysis_tests_class_histogram.png;A bar chart showing the distribution of the target variable Diagnosis.;['Balancing this dataset would be mandatory to improve the results.'] -urinalysis_tests_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -urinalysis_tests_histograms_numeric.png;A set of histograms of the variables ['Age', 'pH', 'Specific Gravity'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Specific Gravity shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 85 of the variables present outliers.', 'The 
boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for pH and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Specific Gravity variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable Specific Gravity seems to be promising.', 'Feature generation based on the use of variable Specific Gravity wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of pH variable, dummification would have been a better codification.', 'It is better to drop the variable Specific Gravity than removing all records with missing values.', 'Not knowing the semantics of Specific Gravity variable, dummification could have been a more adequate codification.'] -detect_dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Ic <= 71.01 and the second with the condition Vb <= -0.37.;['It is clear that variable Vb is one of the four most relevant features.', 'The variable Vc seems to be one of the five most relevant features.', 'The variable Ib discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Vc is the second most discriminative variable regarding the class.', 'Variable Ic is one of the most relevant variables.', 'Variable Ic seems to be relevant for the majority of mining tasks.', 'Variables Vc and Ib seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The variable Va seems to be one of the four most relevant features.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 797.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 1206.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 3.'] +urinalysis_tests_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +urinalysis_tests_histograms_numeric.png;A set of histograms of the variables ['Age', 'pH', 'Specific Gravity'].;['All variables, but the class, should be dealt with as binary.', 'The variable Specific Gravity can be seen as ordinal.', 'The variable Specific Gravity can be seen as ordinal without losing information.', 'Variable Specific Gravity is balanced.', 'It is clear that 
variable Age shows some outliers, but we can’t be sure of the same for variable pH.', 'Outliers seem to be a problem in the dataset.', 'Variable Specific Gravity shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Specific Gravity presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and pH variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Age variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of pH seems to be promising.', 'Given the usual semantics of Specific Gravity variable, dummification would have been a better codification.', 'It is better to drop the variable pH than removing all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.'] +detect_dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Ic <= 71.01 and the second with the condition Vb <= -0.37.;['The variable Ic discriminates between the target values, as shown in the decision tree.', 'Variable Vb is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives reported in the same tree is 50.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 3.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], the Decision Tree presented classifies (A, not B) as 0.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], the Decision Tree presented classifies (A,B) as 0.'] detect_dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -detect_dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -detect_dataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its 
estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -detect_dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -detect_dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +detect_dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +detect_dataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +detect_dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] +detect_dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] detect_dataset_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 
2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -detect_dataset_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] -detect_dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Ic or Ia can be discarded without losing information.', 'The variable Ia can be discarded without risking losing information.', 'Variables Vb and Ia are redundant, but we can’t say the same for the pair Va and Ic.', 'Variables Ia and Ib are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ic seems to be relevant for the majority of mining tasks.', 'Variables Vc and Ic seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Va might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Vc previously than variable Ic.'] -detect_dataset_boxplots.png;A set of boxplots of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['Variable Vb is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Ib shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Vb shows a high number of outlier values.', 'Variable Ia doesn’t have any outliers.', 'Variable Ia presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +detect_dataset_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 10 and 20%.'] +detect_dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Vc or Va can be discarded without losing information.', 'The variable Ic can be discarded without risking losing information.', 'Variables Ia and Ic are redundant, but we can’t say the same for the pair Vc and Vb.', 'Variables Ib and Vc are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Vb seems to be relevant for the majority of mining tasks.', 'Variables Ib and Ic seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Ic might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Ic previously than variable Va.'] +detect_dataset_boxplots.png;A set of boxplots of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['Variable Vb is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Vb shows some outliers, but we can’t be sure of the same for variable Va.', 'Outliers seem to be a problem in the dataset.', 'Variable Vb shows some outlier values.', 'Variable Vb doesn’t have any outliers.', 'Variable Ia presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] detect_dataset_class_histogram.png;A bar chart showing the distribution of the target variable Output.;['Balancing this dataset would be mandatory to improve the results.'] -detect_dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -detect_dataset_histograms_numeric.png;A set of histograms of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Ic can be seen as ordinal.', 'The variable Ib can be seen as ordinal without losing information.', 'Variable Va is balanced.', 'It is clear that variable Vb shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Ib shows some outlier values.', 'Variable Ic doesn’t have any outliers.', 'Variable Vc presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Vc and Ia variables, 
dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Ic variable, dummification would be the most adequate encoding.', 'The variable Ia can be coded as ordinal without losing information.', 'Feature generation based on variable Vb seems to be promising.', 'Feature generation based on the use of variable Vb wouldn’t be useful, but the use of Ia seems to be promising.', 'Given the usual semantics of Ic variable, dummification would have been a better codification.', 'It is better to drop the variable Ic than removing all records with missing values.', 'Not knowing the semantics of Va variable, dummification could have been a more adequate codification.'] -diabetes_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition BMI <= 29.85 and the second with the condition Age <= 27.5.;['It is clear that variable Glucose is one of the five most relevant features.', 'The variable Glucose seems to be one of the three most relevant features.', 'The variable Insulin discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the second most discriminative variable regarding the class.', 'Variable DiabetesPedigreeFunction is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Age and DiabetesPedigreeFunction seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'The variable Insulin seems to be one of the three most relevant features.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as 1.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 161.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 167.'] -diabetes_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -diabetes_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -diabetes_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity 
resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -diabetes_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -diabetes_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] +detect_dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +detect_dataset_histograms_numeric.png;A set of histograms of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['All variables, but the class, should be dealt with as date.', 'The variable Ic can be seen as ordinal.', 'The variable Vc can be seen as ordinal without losing information.', 'Variable Ia is balanced.', 'It is clear that variable Va shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Ia shows a high number of outlier values.', 'Variable Ic doesn’t have any outliers.', 'Variable Ic presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Ia and Ib variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Vc variable, dummification would be the most adequate encoding.', 'The variable Vb can be coded as ordinal without losing information.', 'Feature generation based on variable Vb seems to be promising.', 'Feature generation based on the use of variable Ic wouldn’t be useful, but the use of Ia seems to be promising.', 'Given the usual semantics of Ib variable, dummification would have been a better codification.', 'It is better to drop the variable Ia than removing all records with missing values.', 'Not knowing the semantics of Ia variable, dummification could have been a more adequate codification.'] +diabetes_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition BMI <= 29.85 and the second with the condition Age <= 27.5.;['The variable BMI discriminates between the target values, as shown in the decision tree.', 
'Variable BMI is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of True Positives reported in the same tree is 30.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is higher than its specificity.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], the Decision Tree presented classifies (not A, not B) as 1.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 98.'] +diabetes_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +diabetes_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +diabetes_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +diabetes_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +diabetes_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] diabetes_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to 
the overfitting phenomenon.'] -diabetes_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 15 and 20%.'] -diabetes_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables Age or Insulin can be discarded without losing information.', 'The variable Glucose can be discarded without risking losing information.', 'Variables Pregnancies and BMI are redundant, but we can’t say the same for the pair SkinThickness and Glucose.', 'Variables BloodPressure and DiabetesPedigreeFunction are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Glucose seems to be relevant for the majority of mining tasks.', 'Variables Age and DiabetesPedigreeFunction seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Insulin might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Pregnancies previously than variable Insulin.'] -diabetes_boxplots.png;A set of boxplots of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable DiabetesPedigreeFunction shows some outliers, but we can’t be sure of the same for variable BloodPressure.', 'Outliers seem to be a problem in the dataset.', 'Variable Pregnancies shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable BloodPressure presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +diabetes_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.'] +diabetes_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Age or Insulin can be discarded without losing information.', 'The variable DiabetesPedigreeFunction can be discarded without risking losing information.', 'Variables Age and SkinThickness are redundant, but we can’t say the same for the pair BMI and BloodPressure.', 'Variables DiabetesPedigreeFunction and Age are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable SkinThickness seems to be relevant for the majority of mining tasks.', 'Variables Insulin and Glucose seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Insulin might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable DiabetesPedigreeFunction previously than variable Pregnancies.'] +diabetes_boxplots.png;A set of boxplots of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['Variable DiabetesPedigreeFunction is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Glucose shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Pregnancies shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable BMI presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] diabetes_class_histogram.png;A bar chart showing the distribution of the target variable Outcome.;['Balancing this dataset would be mandatory to improve the results.'] -diabetes_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -diabetes_histograms_numeric.png;A set of histograms of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['All variables, but the class, should be dealt with as numeric.', 'The variable DiabetesPedigreeFunction can be seen as ordinal.', 'The variable BloodPressure can be seen as ordinal without losing information.', 'Variable Insulin is balanced.', 'It is clear that variable SkinThickness shows some outliers, but we can’t be sure of the same for variable BMI.', 'Outliers seem to be 
a problem in the dataset.', 'Variable DiabetesPedigreeFunction shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable DiabetesPedigreeFunction presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and Pregnancies variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Age variable, dummification would be the most adequate encoding.', 'The variable BloodPressure can be coded as ordinal without losing information.', 'Feature generation based on variable SkinThickness seems to be promising.', 'Feature generation based on the use of variable Glucose wouldn’t be useful, but the use of Pregnancies seems to be promising.', 'Given the usual semantics of DiabetesPedigreeFunction variable, dummification would have been a better codification.', 'It is better to drop the variable Insulin than removing all records with missing values.', 'Not knowing the semantics of DiabetesPedigreeFunction variable, dummification could have been a more adequate codification.'] -Placement_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition ssc_p <= 60.09 and the second with the condition hsc_p <= 70.24.;['It is clear that variable mba_p is one of the three most relevant features.', 'The variable mba_p seems to be one of the three most relevant features.', 'The variable degree_p discriminates between the target values, as shown in the decision tree.', 'It is possible to state that mba_p is the second most discriminative variable regarding the class.', 'Variable mba_p is one of the most relevant variables.', 'Variable ssc_p seems to be relevant for the majority of mining tasks.', 'Variables degree_p and etest_p seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The variable etest_p seems to be one of the three most relevant features.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that KNN algorithm classifies (A,B) as Not Placed for any k ≤ 16.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], the Decision Tree presented classifies (not A, B) as Not Placed.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that KNN algorithm classifies (not A, B) as Placed for any k ≤ 68.'] +diabetes_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +diabetes_histograms_numeric.png;A set of histograms of the 
variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Age can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Pregnancies is balanced.', 'It is clear that variable DiabetesPedigreeFunction shows some outliers, but we can’t be sure of the same for variable Glucose.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable BMI doesn’t have any outliers.', 'Variable BloodPressure presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for BloodPressure and Pregnancies variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for BMI variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable BMI seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of Pregnancies seems to be promising.', 'Given the usual semantics of BMI variable, dummification would have been a better codification.', 'It is better to drop the variable BMI than removing all records with missing values.', 'Not knowing the semantics of SkinThickness variable, dummification could have been a more adequate codification.'] +Placement_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition ssc_p <= 60.09 and the second with the condition hsc_p <= 70.24.;['The variable ssc_p discriminates between the target values, as shown in the decision tree.', 'Variable hsc_p is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The accuracy for the presented tree is higher than 75%.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that KNN algorithm classifies (not A, not B) as Placed for any k ≤ 68.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], the Decision Tree presented classifies (A, not B) as Placed.'] Placement_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Placement_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis 
represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -Placement_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Placement_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -Placement_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] +Placement_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +Placement_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +Placement_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +Placement_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models 
with depth higher than 3.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] Placement_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Placement_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -Placement_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables ssc_p or hsc_p can be discarded without losing information.', 'The variable ssc_p can be discarded without risking losing information.', 'Variables etest_p and ssc_p are redundant, but we can’t say the same for the pair mba_p and degree_p.', 'Variables hsc_p and degree_p are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable hsc_p seems to be relevant for the majority of mining tasks.', 'Variables mba_p and etest_p seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable degree_p might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable etest_p previously than variable ssc_p.'] -Placement_boxplots.png;A set of boxplots of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['Variable etest_p is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable etest_p shows some outliers, but we can’t be sure of the same for variable ssc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable hsc_p shows a high number of outlier values.', 'Variable ssc_p doesn’t have any outliers.', 'Variable ssc_p presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Placement_histograms_symbolic.png;A set of bar charts of the variables ['hsc_s', 'degree_t', 'gender', 'ssc_b', 'hsc_b', 'workex', 'specialisation'].;['All variables, but the class, should be dealt with as numeric.', 'The variable ssc_b can be seen as ordinal.', 'The variable workex can be seen as ordinal without losing information.', 'Considering the common semantics for workex and hsc_s variables, dummification if applied 
would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for workex variable, dummification would be the most adequate encoding.', 'The variable hsc_s can be coded as ordinal without losing information.', 'Feature generation based on variable hsc_s seems to be promising.', 'Feature generation based on the use of variable gender wouldn’t be useful, but the use of hsc_s seems to be promising.', 'Given the usual semantics of hsc_s variable, dummification would have been a better codification.', 'It is better to drop the variable specialisation than removing all records with missing values.', 'Not knowing the semantics of workex variable, dummification could have been a more adequate codification.'] +Placement_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 30%.'] +Placement_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables hsc_p or mba_p can be discarded without losing information.', 'The variable mba_p can be discarded without risking losing information.', 'Variables hsc_p and ssc_p are redundant, but we can’t say the same for the pair degree_p and etest_p.', 'Variables hsc_p and etest_p are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ssc_p seems to be relevant for the majority of mining tasks.', 'Variables hsc_p and degree_p seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable degree_p might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable hsc_p previously than variable mba_p.'] +Placement_boxplots.png;A set of boxplots of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['Variable etest_p is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable mba_p shows some outliers, but we can’t be sure of the same for variable ssc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable hsc_p shows some outlier values.', 'Variable hsc_p doesn’t have any outliers.', 'Variable hsc_p presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Placement_histograms_symbolic.png;A set of bar charts of the variables ['hsc_s', 'degree_t', 'gender', 'ssc_b', 'hsc_b', 'workex', 'specialisation'].;['All variables, but the class, should be dealt with as numeric.', 'The variable degree_t can be 
seen as ordinal.', 'The variable specialisation can be seen as ordinal without losing information.', 'Considering the common semantics for specialisation and hsc_s variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for specialisation variable, dummification would be the most adequate encoding.', 'The variable ssc_b can be coded as ordinal without losing information.', 'Feature generation based on variable hsc_s seems to be promising.', 'Feature generation based on the use of variable hsc_s wouldn’t be useful, but the use of degree_t seems to be promising.', 'Given the usual semantics of hsc_s variable, dummification would have been a better codification.', 'It is better to drop the variable ssc_b than removing all records with missing values.', 'Not knowing the semantics of hsc_b variable, dummification could have been a more adequate codification.'] Placement_class_histogram.png;A bar chart showing the distribution of the target variable status.;['Balancing this dataset would be mandatory to improve the results.'] Placement_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Placement_histograms_numeric.png;A set of histograms of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['All variables, but the class, should be dealt with as binary.', 'The variable mba_p can be seen as ordinal.', 'The variable hsc_p can be seen as ordinal without losing information.', 'Variable etest_p is balanced.', 'It is clear that variable ssc_p shows some outliers, but we can’t be sure of the same for variable etest_p.', 'Outliers seem to be a problem in the dataset.', 'Variable mba_p shows a high number of outlier values.', 'Variable etest_p doesn’t have any outliers.', 'Variable mba_p presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ssc_p and hsc_p variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mba_p variable, dummification would be the most adequate encoding.', 'The variable degree_p can be coded as ordinal without losing information.', 'Feature generation based on variable etest_p seems to be promising.', 'Feature generation based on the use of variable mba_p wouldn’t be useful, but the use of ssc_p seems to be promising.', 'Given the usual semantics of degree_p variable, dummification would have been a better codification.', 'It is better to drop the variable hsc_p than removing all records with missing values.', 'Not knowing the semantics of mba_p variable, dummification could have been a more adequate codification.'] -Liver_Patient_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Alkphos <= 211.5 and the second with the condition Sgot <= 26.5.;['It is clear that variable ALB is one of the four most relevant features.', 'The variable AG_Ratio seems to be one of the five most 
relevant features.', 'The variable TB discriminates between the target values, as shown in the decision tree.', 'It is possible to state that TP is the second most discriminative variable regarding the class.', 'Variable TP is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables ALB and Age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The recall for the presented tree is lower than 90%.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 77.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as 1.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], the Decision Tree presented classifies (not A, not B) as 2.'] +Placement_histograms_numeric.png;A set of histograms of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['All variables, but the class, should be dealt with as numeric.', 'The variable etest_p can be seen as ordinal.', 'The variable mba_p can be seen as ordinal without losing information.', 'Variable degree_p is balanced.', 'It is clear that variable mba_p shows some outliers, but we can’t be sure of the same for variable hsc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable mba_p shows some outlier values.', 'Variable ssc_p doesn’t have any outliers.', 'Variable degree_p presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ssc_p and hsc_p variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ssc_p variable, dummification would be the most adequate encoding.', 'The variable degree_p can be coded as ordinal without losing information.', 'Feature generation based on variable ssc_p seems to be promising.', 'Feature generation based on the use of variable etest_p wouldn’t be useful, but the use of ssc_p seems to be promising.', 'Given the usual semantics of degree_p variable, dummification would have been a better codification.', 'It is better to drop the variable etest_p than removing all records with missing values.', 'Not knowing the semantics of hsc_p variable, dummification could have been a more adequate codification.'] +Liver_Patient_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Alkphos <= 211.5 and the second with the condition Sgot <= 26.5.;['The variable Sgot discriminates between the target values, as shown in the decision tree.', 'Variable Alkphos is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported 
in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The precision for the presented tree is higher than its recall.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 1.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that KNN algorithm classifies (not A, not B) as 2 for any k ≤ 94.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], the Decision Tree presented classifies (not A, B) as 1.'] Liver_Patient_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Liver_Patient_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Liver_Patient_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -Liver_Patient_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -Liver_Patient_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +Liver_Patient_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +Liver_Patient_overfitting_rf.png;A multi-line chart showing the 
overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +Liver_Patient_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +Liver_Patient_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] Liver_Patient_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Liver_Patient_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] -Liver_Patient_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables TP or Sgpt can be discarded without losing information.', 'The variable DB can be discarded without risking losing information.', 'Variables AG_Ratio and TP are redundant, but we can’t say the same for the pair Sgot and Alkphos.', 'Variables Sgot and TB are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable TB seems to be relevant for the majority of mining tasks.', 'Variables Age and Sgpt seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable DB might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable TB previously than variable AG_Ratio.'] -Liver_Patient_boxplots.png;A set of boxplots of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['Variable TP is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable TB shows some outliers, but we can’t be sure of the same for variable ALB.', 'Outliers seem to be a problem in the dataset.', 'Variable TB shows a high number of outlier values.', 'Variable AG_Ratio doesn’t have any outliers.', 'Variable Sgot presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Liver_Patient_histograms_symbolic.png;A set of bar charts of the variables ['Gender'].;['All variables, but the class, should be dealt with as binary.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than removing all records with missing values.', 'Not knowing the semantics of Gender variable, dummification could have been a more adequate codification.'] +Liver_Patient_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 
10 and 25%.'] +Liver_Patient_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['The intrinsic dimensionality of this dataset is 8.', 'One of the variables ALB or DB can be discarded without losing information.', 'The variable AG_Ratio can be discarded without risking losing information.', 'Variables AG_Ratio and DB are redundant, but we can’t say the same for the pair Sgpt and Sgot.', 'Variables Sgpt and AG_Ratio are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Sgpt seems to be relevant for the majority of mining tasks.', 'Variables Age and DB seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable DB might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable DB previously than variable TB.'] +Liver_Patient_boxplots.png;A set of boxplots of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['Variable ALB is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Sgpt shows some outliers, but we can’t be sure of the same for variable TP.', 'Outliers seem to be a problem in the dataset.', 'Variable Sgot shows a high number of outlier values.', 'Variable TP doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Liver_Patient_histograms_symbolic.png;A set of bar charts of the variables ['Gender'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than removing all records with missing values.', 'Not knowing the semantics of Gender variable, dummification could have been a more adequate codification.'] Liver_Patient_mv.png;A bar chart showing the number of missing values per variable of the dataset. 
The variables that have missing values are: ['AG_Ratio'].;['Discarding variable AG_Ratio would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable AG_Ratio seems to be promising.', 'It is better to drop the variable AG_Ratio than removing all records with missing values.'] Liver_Patient_class_histogram.png;A bar chart showing the distribution of the target variable Selector.;['Balancing this dataset would be mandatory to improve the results.'] Liver_Patient_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Liver_Patient_histograms_numeric.png;A set of histograms of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['All variables, but the class, should be dealt with as binary.', 'The variable Sgpt can be seen as ordinal.', 'The variable Alkphos can be seen as ordinal without losing information.', 'Variable Sgpt is balanced.', 'It is clear that variable ALB shows some outliers, but we can’t be sure of the same for variable DB.', 'Outliers seem to be a problem in the dataset.', 'Variable AG_Ratio shows some outlier values.', 'Variable AG_Ratio doesn’t have any outliers.', 'Variable TB presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for AG_Ratio and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sgpt variable, dummification would be the most adequate encoding.', 'The variable TB can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable Alkphos wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of Alkphos variable, dummification would have been a better codification.', 'It is better to drop the variable AG_Ratio than removing all records with missing values.', 'Not knowing the semantics of ALB variable, dummification could have been a more adequate codification.'] -Hotel_Reservations_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition lead_time <= 151.5 and the second with the condition no_of_special_requests <= 2.5.;['It is clear that variable no_of_special_requests is one of the five most relevant features.', 'The variable no_of_weekend_nights seems to be one of the two most relevant features.', 'The variable no_of_weekend_nights discriminates between the target values, as shown in the decision tree.', 'It is possible to state that no_of_children is the first most 
discriminative variable regarding the class.', 'Variable no_of_children is one of the most relevant variables.', 'Variable avg_price_per_room seems to be relevant for the majority of mining tasks.', 'Variables no_of_weekend_nights and no_of_adults seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of True Negatives for the presented tree.', 'The specificity for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], the Decision Tree presented classifies (not A, not B) as Canceled.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], the Decision Tree presented classifies (A, not B) as Canceled.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that KNN algorithm classifies (A,B) as Canceled for any k ≤ 9756.'] +Liver_Patient_histograms_numeric.png;A set of histograms of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['All variables, but the class, should be dealt with as binary.', 'The variable ALB can be seen as ordinal.', 'The variable AG_Ratio can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Sgot shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable ALB shows a high number of outlier values.', 'Variable DB doesn’t have any outliers.', 'Variable Alkphos presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and TB variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for TB variable, dummification would be the most adequate encoding.', 'The variable AG_Ratio can be coded as ordinal without losing information.', 'Feature generation based on variable ALB seems to be promising.', 'Feature generation based on the use of variable Sgpt wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of Alkphos variable, dummification would have been a better codification.', 'It is better to drop the variable AG_Ratio than removing all records with missing values.', 'Not knowing the semantics of AG_Ratio variable, dummification could have been a more adequate codification.'] +Hotel_Reservations_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition lead_time <= 151.5 and the second with the condition no_of_special_requests <= 2.5.;['The variable lead_time discriminates between the target values, as shown in the decision tree.', 'Variable no_of_special_requests is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As 
reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The variable lead_time discriminates between the target values, as shown in the decision tree.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that KNN algorithm classifies (A, not B) as Canceled for any k ≤ 4955.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that KNN algorithm classifies (A,B) as Not_Canceled for any k ≤ 10612.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that KNN algorithm classifies (A,B) as Canceled for any k ≤ 9756.'] Hotel_Reservations_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -Hotel_Reservations_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -Hotel_Reservations_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -Hotel_Reservations_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Hotel_Reservations_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +Hotel_Reservations_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are 
able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +Hotel_Reservations_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +Hotel_Reservations_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +Hotel_Reservations_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] Hotel_Reservations_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Hotel_Reservations_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 30%.'] -Hotel_Reservations_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables no_of_children or arrival_date can be discarded without losing information.', 'The variable avg_price_per_room can be discarded without risking losing information.', 'Variables no_of_adults and no_of_special_requests are redundant, but we can’t say the same for the pair no_of_children and lead_time.', 'Variables no_of_week_nights and no_of_weekend_nights are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable no_of_week_nights seems to be relevant for the majority of mining tasks.', 'Variables no_of_special_requests and no_of_week_nights seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable no_of_special_requests might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable no_of_special_requests previously than variable no_of_children.'] -Hotel_Reservations_boxplots.png;A set of boxplots of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['Variable no_of_adults is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable no_of_weekend_nights shows some outliers, but we can’t be sure of the same for variable no_of_children.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_special_requests shows a high number of outlier values.', 'Variable no_of_week_nights doesn’t have any outliers.', 'Variable arrival_date presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Hotel_Reservations_histograms_symbolic.png;A set of bar charts of the variables ['type_of_meal_plan', 'room_type_reserved', 'required_car_parking_space', 'arrival_year', 'repeated_guest'].;['All variables, but the class, should be dealt with as binary.', 'The variable required_car_parking_space can be seen as ordinal.', 'The variable repeated_guest can be seen as ordinal without losing information.', 'Considering the common semantics for repeated_guest and type_of_meal_plan variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for room_type_reserved variable, dummification would be the most adequate encoding.', 'The variable type_of_meal_plan can be coded as ordinal without losing information.', 'Feature generation based on variable room_type_reserved seems to be promising.', 
'Feature generation based on the use of variable arrival_year wouldn’t be useful, but the use of type_of_meal_plan seems to be promising.', 'Given the usual semantics of type_of_meal_plan variable, dummification would have been a better codification.', 'It is better to drop the variable repeated_guest than removing all records with missing values.', 'Not knowing the semantics of arrival_year variable, dummification could have been a more adequate codification.'] +Hotel_Reservations_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 20%.'] +Hotel_Reservations_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables arrival_month or no_of_special_requests can be discarded without losing information.', 'The variable no_of_adults can be discarded without risking losing information.', 'Variables no_of_adults and arrival_month are redundant, but we can’t say the same for the pair no_of_week_nights and no_of_weekend_nights.', 'Variables no_of_adults and no_of_week_nights are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable arrival_month seems to be relevant for the majority of mining tasks.', 'Variables arrival_month and no_of_adults seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable no_of_adults might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable arrival_date previously than variable no_of_week_nights.'] +Hotel_Reservations_boxplots.png;A set of boxplots of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['Variable arrival_date is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable no_of_weekend_nights shows some outliers, but we can’t be sure of the same for variable lead_time.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_week_nights shows a high number of outlier values.', 'Variable no_of_week_nights doesn’t have any outliers.', 'Variable avg_price_per_room presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Hotel_Reservations_histograms_symbolic.png;A set 
of bar charts of the variables ['type_of_meal_plan', 'room_type_reserved', 'required_car_parking_space', 'arrival_year', 'repeated_guest'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable room_type_reserved can be seen as ordinal.', 'The variable type_of_meal_plan can be seen as ordinal without losing information.', 'Considering the common semantics for arrival_year and type_of_meal_plan variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for type_of_meal_plan variable, dummification would be the most adequate encoding.', 'The variable type_of_meal_plan can be coded as ordinal without losing information.', 'Feature generation based on variable arrival_year seems to be promising.', 'Feature generation based on the use of variable required_car_parking_space wouldn’t be useful, but the use of type_of_meal_plan seems to be promising.', 'Given the usual semantics of required_car_parking_space variable, dummification would have been a better codification.', 'It is better to drop the variable required_car_parking_space than removing all records with missing values.', 'Not knowing the semantics of arrival_year variable, dummification could have been a more adequate codification.'] Hotel_Reservations_class_histogram.png;A bar chart showing the distribution of the target variable booking_status.;['Balancing this dataset would be mandatory to improve the results.'] -Hotel_Reservations_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Hotel_Reservations_histograms_numeric.png;A set of histograms of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['All variables, but the class, should be dealt with as date.', 'The variable arrival_date can be seen as ordinal.', 'The variable lead_time can be seen as ordinal without losing information.', 'Variable arrival_date is balanced.', 'It is clear that variable no_of_week_nights shows some outliers, but we can’t be sure of the same for variable lead_time.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_adults shows some outlier values.', 'Variable no_of_weekend_nights doesn’t have any outliers.', 'Variable avg_price_per_room presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for avg_price_per_room and no_of_adults variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for no_of_children variable, dummification would be the most adequate encoding.', 'The variable no_of_children can be coded as ordinal without losing information.', 'Feature generation based on variable no_of_adults seems to be promising.', 'Feature generation based on the use of variable no_of_adults wouldn’t be useful, but the use of no_of_children seems to be promising.', 'Given the 
usual semantics of no_of_children variable, dummification would have been a better codification.', 'It is better to drop the variable no_of_children than removing all records with missing values.', 'Not knowing the semantics of no_of_special_requests variable, dummification could have been a more adequate codification.'] -StressLevelDataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition basic_needs <= 3.5 and the second with the condition bullying <= 1.5.;['It is clear that variable self_esteem is one of the four most relevant features.', 'The variable self_esteem seems to be one of the three most relevant features.', 'The variable living_conditions discriminates between the target values, as shown in the decision tree.', 'It is possible to state that headache is the second most discriminative variable regarding the class.', 'Variable headache is one of the most relevant variables.', 'Variable bullying seems to be relevant for the majority of mining tasks.', 'Variables headache and depression seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives reported in the same tree is 50.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 271.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 1.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that KNN algorithm classifies (not A, not B) as 2 for any k ≤ 271.'] -StressLevelDataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +Hotel_Reservations_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Hotel_Reservations_histograms_numeric.png;A set of histograms of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['All variables, but the class, should be dealt with as date.', 'The variable arrival_date can be seen as ordinal.', 'The variable no_of_children can be seen as ordinal without losing information.', 'Variable no_of_children is balanced.', 'It is clear that variable no_of_special_requests shows some outliers, but we can’t be sure of the same for variable avg_price_per_room.', 'Outliers seem to be a problem 
in the dataset.', 'Variable arrival_date shows a high number of outlier values.', 'Variable no_of_adults doesn’t have any outliers.', 'Variable no_of_weekend_nights presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for arrival_date and no_of_adults variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for no_of_special_requests variable, dummification would be the most adequate encoding.', 'The variable avg_price_per_room can be coded as ordinal without losing information.', 'Feature generation based on variable no_of_special_requests seems to be promising.', 'Feature generation based on the use of variable no_of_week_nights wouldn’t be useful, but the use of no_of_adults seems to be promising.', 'Given the usual semantics of no_of_adults variable, dummification would have been a better codification.', 'It is better to drop the variable arrival_date than removing all records with missing values.', 'Not knowing the semantics of no_of_week_nights variable, dummification could have been a more adequate codification.'] +StressLevelDataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition basic_needs <= 3.5 and the second with the condition bullying <= 1.5.;['The variable bullying discriminates between the target values, as shown in the decision tree.', 'Variable basic_needs is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The variable basic_needs seems to be one of the four most relevant features.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], the Decision Tree presented classifies (A, not B) as 2.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 271.'] +StressLevelDataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] StressLevelDataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -StressLevelDataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results 
for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -StressLevelDataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -StressLevelDataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] -StressLevelDataset_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 9 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] -StressLevelDataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables depression or basic_needs can be discarded without losing information.', 'The variable breathing_problem can be discarded without risking losing information.', 'Variables bullying and study_load are redundant, but we can’t say the same for the pair breathing_problem and living_conditions.', 'Variables headache and living_conditions are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable living_conditions seems to be relevant for the majority of mining tasks.', 'Variables sleep_quality and self_esteem seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable self_esteem might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable study_load previously than variable depression.'] -StressLevelDataset_boxplots.png;A set of boxplots of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['Variable headache is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable self_esteem shows some outliers, but we can’t be sure of the same for variable living_conditions.', 'Outliers seem to be a problem in the dataset.', 'Variable self_esteem shows a high number of outlier values.', 'Variable bullying doesn’t have any outliers.', 'Variable sleep_quality presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -StressLevelDataset_histograms_symbolic.png;A set of bar charts of the variables ['mental_health_history'].;['All variables, but the class, should be dealt with as binary.', 'The variable mental_health_history can be seen as ordinal.', 'The variable mental_health_history can be seen as ordinal without losing information.', 'Considering the common semantics for mental_health_history and variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mental_health_history variable, dummification would be the most adequate encoding.', 'The variable mental_health_history can be coded as ordinal without losing information.', 'Feature generation based on variable mental_health_history seems to be promising.', 'Feature generation based on the use of variable mental_health_history wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of mental_health_history variable, 
dummification would have been a better codification.', 'It is better to drop the variable mental_health_history than removing all records with missing values.', 'Not knowing the semantics of mental_health_history variable, dummification could have been a more adequate codification.'] +StressLevelDataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +StressLevelDataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] +StressLevelDataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] +StressLevelDataset_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 25%.'] +StressLevelDataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables headache or bullying can be discarded without losing information.', 'The variable breathing_problem can be discarded without risking losing information.', 'Variables anxiety_level and bullying are redundant, but we can’t say the same for the pair study_load and living_conditions.', 'Variables bullying and depression are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable breathing_problem seems to be relevant for the majority of mining tasks.', 'Variables living_conditions and breathing_problem seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable basic_needs might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable basic_needs previously than variable self_esteem.'] +StressLevelDataset_boxplots.png;A set of boxplots of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['Variable study_load is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable self_esteem shows some outliers, but we can’t be sure of the same for variable anxiety_level.', 'Outliers seem to be a problem in the dataset.', 'Variable basic_needs shows some outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable depression presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +StressLevelDataset_histograms_symbolic.png;A set of bar charts of the variables ['mental_health_history'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable mental_health_history can be seen as ordinal.', 'The variable mental_health_history can be seen as ordinal without losing information.', 'Considering the common semantics for mental_health_history and variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mental_health_history variable, dummification would be the most adequate encoding.', 'The variable mental_health_history can be coded as ordinal without losing information.', 'Feature generation based on variable mental_health_history seems to be promising.', 'Feature generation based on the use of variable mental_health_history wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of mental_health_history variable, dummification would have been a 
better codification.', 'It is better to drop the variable mental_health_history than removing all records with missing values.', 'Not knowing the semantics of mental_health_history variable, dummification could have been a more adequate codification.'] StressLevelDataset_class_histogram.png;A bar chart showing the distribution of the target variable stress_level.;['Balancing this dataset would be mandatory to improve the results.'] -StressLevelDataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -StressLevelDataset_histograms_numeric.png;A set of histograms of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['All variables, but the class, should be dealt with as date.', 'The variable sleep_quality can be seen as ordinal.', 'The variable sleep_quality can be seen as ordinal without losing information.', 'Variable sleep_quality is balanced.', 'It is clear that variable living_conditions shows some outliers, but we can’t be sure of the same for variable breathing_problem.', 'Outliers seem to be a problem in the dataset.', 'Variable basic_needs shows a high number of outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable breathing_problem presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for sleep_quality and anxiety_level variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for study_load variable, dummification would be the most adequate encoding.', 'The variable anxiety_level can be coded as ordinal without losing information.', 'Feature generation based on variable living_conditions seems to be promising.', 'Feature generation based on the use of variable breathing_problem wouldn’t be useful, but the use of anxiety_level seems to be promising.', 'Given the usual semantics of self_esteem variable, dummification would have been a better codification.', 'It is better to drop the variable bullying than removing all records with missing values.', 'Not knowing the semantics of sleep_quality variable, dummification could have been a more adequate codification.'] -WineQT_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition density <= 1.0 and the second with the condition chlorides <= 0.08.;['It is clear that variable residual sugar is one of the four most relevant features.', 'The variable pH seems to be one of the three most relevant features.', 'The variable residual sugar discriminates between the target values, as shown in the decision tree.', 'It is possible to state that alcohol is the second most discriminative variable regarding the class.', 'Variable total sulfur dioxide is one of the most relevant variables.', 'Variable sulphates seems to be relevant for the majority of mining tasks.', 'Variables pH and sulphates seem to be 
useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of False Positives reported in the same tree is 10.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The variable free sulfur dioxide seems to be one of the five most relevant features.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that KNN algorithm classifies (A, not B) as 8 for any k ≤ 154.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that Naive Bayes algorithm classifies (not A, B), as 5.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], the Decision Tree presented classifies (not A, not B) as 3.'] +StressLevelDataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +StressLevelDataset_histograms_numeric.png;A set of histograms of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable living_conditions can be seen as ordinal.', 'The variable breathing_problem can be seen as ordinal without losing information.', 'Variable breathing_problem is balanced.', 'It is clear that variable depression shows some outliers, but we can’t be sure of the same for variable study_load.', 'Outliers seem to be a problem in the dataset.', 'Variable bullying shows a high number of outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable anxiety_level presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for sleep_quality and anxiety_level variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for headache variable, dummification would be the most adequate encoding.', 'The variable breathing_problem can be coded as ordinal without losing information.', 'Feature generation based on variable self_esteem seems to be promising.', 'Feature generation based on the use of variable anxiety_level wouldn’t be useful, but the use of self_esteem seems to be promising.', 'Given the usual semantics of study_load variable, dummification would have been a better codification.', 'It is better to drop the variable depression than removing all records with missing values.', 'Not knowing the semantics of basic_needs variable, dummification could have been a more adequate codification.'] +WineQT_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition density <= 1.0 and the second with the condition 
chlorides <= 0.08.;['The variable chlorides discriminates between the target values, as shown in the decision tree.', 'Variable density is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that KNN algorithm classifies (not A, not B) as 6 for any k ≤ 447.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 3.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that KNN algorithm classifies (not A, not B) as 5 for any k ≤ 172.'] WineQT_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] WineQT_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -WineQT_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -WineQT_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -WineQT_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -WineQT_pca.png;A bar chart showing the explained variance ratio of 11 principal 
components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -WineQT_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables sulphates or free sulfur dioxide can be discarded without losing information.', 'The variable density can be discarded without risking losing information.', 'Variables fixed acidity and citric acid are redundant, but we can’t say the same for the pair free sulfur dioxide and density.', 'Variables fixed acidity and free sulfur dioxide are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable total sulfur dioxide seems to be relevant for the majority of mining tasks.', 'Variables chlorides and volatile acidity seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable fixed acidity might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable residual sugar previously than variable free sulfur dioxide.'] -WineQT_boxplots.png;A set of boxplots of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['Variable free sulfur dioxide is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable free sulfur dioxide shows some outliers, but we can’t be sure of the same for variable citric acid.', 'Outliers seem to be a problem in the dataset.', 'Variable total sulfur dioxide shows some outlier values.', 'Variable pH doesn’t have any outliers.', 'Variable alcohol presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +WineQT_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +WineQT_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis 
represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] +WineQT_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +WineQT_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 15 and 25%.'] +WineQT_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables citric acid or residual sugar can be discarded without losing information.', 'The variable chlorides can be discarded without risking losing information.', 'Variables sulphates and pH are redundant, but we can’t say the same for the pair free sulfur dioxide and volatile acidity.', 'Variables free sulfur dioxide and total sulfur dioxide are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable volatile acidity seems to be relevant for the majority of mining tasks.', 'Variables chlorides and citric acid seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable fixed acidity might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable pH previously than variable chlorides.'] +WineQT_boxplots.png;A set of boxplots of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['Variable citric acid is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable pH shows some outliers, but we can’t be sure of the same for variable volatile acidity.', 'Outliers seem to be a problem in the dataset.', 'Variable free sulfur dioxide shows a high number of outlier values.', 'Variable chlorides doesn’t have any outliers.', 'Variable total sulfur dioxide presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in 
this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] WineQT_class_histogram.png;A bar chart showing the distribution of the target variable quality.;['Balancing this dataset would be mandatory to improve the results.'] -WineQT_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -WineQT_histograms_numeric.png;A set of histograms of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['All variables, but the class, should be dealt with as numeric.', 'The variable chlorides can be seen as ordinal.', 'The variable citric acid can be seen as ordinal without losing information.', 'Variable free sulfur dioxide is balanced.', 'It is clear that variable residual sugar shows some outliers, but we can’t be sure of the same for variable alcohol.', 'Outliers seem to be a problem in the dataset.', 'Variable residual sugar shows some outlier values.', 'Variable chlorides doesn’t have any outliers.', 'Variable density presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for total sulfur dioxide and fixed acidity variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for sulphates variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable density seems to be promising.', 'Feature generation based on the use of variable alcohol wouldn’t be useful, but the use of fixed acidity seems to be promising.', 'Given the usual semantics of sulphates variable, dummification would have been a better codification.', 'It is better to drop the variable volatile acidity than removing all records with missing values.', 'Not knowing the semantics of density variable, dummification could have been a more adequate codification.'] -loan_data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Loan_Amount_Term <= 420.0 and the second with the condition ApplicantIncome <= 1519.0.;['It is clear that variable ApplicantIncome is one of the four most relevant features.', 'The variable Loan_Amount_Term seems to be one of the five most relevant features.', 'The variable ApplicantIncome discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Loan_Amount_Term is the first most discriminative variable regarding the class.', 'Variable LoanAmount is one of the most relevant variables.', 'Variable Loan_Amount_Term seems to be relevant for the majority of 
mining tasks.', 'Variables LoanAmount and Loan_Amount_Term seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The specificity for the presented tree is lower than 90%.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that Naive Bayes algorithm classifies (not A, B), as Y.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], the Decision Tree presented classifies (A,B) as N.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that KNN algorithm classifies (not A, B) as N for any k ≤ 3.'] -loan_data_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -loan_data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -loan_data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -loan_data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -loan_data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +WineQT_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, 
we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +WineQT_histograms_numeric.png;A set of histograms of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['All variables, but the class, should be dealt with as numeric.', 'The variable fixed acidity can be seen as ordinal.', 'The variable pH can be seen as ordinal without losing information.', 'Variable free sulfur dioxide is balanced.', 'It is clear that variable alcohol shows some outliers, but we can’t be sure of the same for variable sulphates.', 'Outliers seem to be a problem in the dataset.', 'Variable sulphates shows a high number of outlier values.', 'Variable pH doesn’t have any outliers.', 'Variable citric acid presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for citric acid and fixed acidity variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for citric acid variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable density seems to be promising.', 'Feature generation based on the use of variable sulphates wouldn’t be useful, but the use of fixed acidity seems to be promising.', 'Given the usual semantics of citric acid variable, dummification would have been a better codification.', 'It is better to drop the variable free sulfur dioxide than removing all records with missing values.', 'Not knowing the semantics of pH variable, dummification could have been a more adequate codification.'] +loan_data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Loan_Amount_Term <= 420.0 and the second with the condition ApplicantIncome <= 1519.0.;['The variable ApplicantIncome discriminates between the target values, as shown in the decision tree.', 'Variable ApplicantIncome is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is lower than 90%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that KNN algorithm classifies (not A, not B) as Y for any k ≤ 3.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that KNN algorithm classifies (not A, B) as N for any k ≤ 204.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that Naive Bayes 
algorithm classifies (not A, not B), as N.'] +loan_data_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +loan_data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +loan_data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +loan_data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] +loan_data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] loan_data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -loan_data_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 10 and 25%.'] -loan_data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables ApplicantIncome or LoanAmount can be discarded without losing information.', 'The variable ApplicantIncome can be discarded without risking losing information.', 'Variables Loan_Amount_Term and ApplicantIncome are redundant.', 'Variables LoanAmount and CoapplicantIncome are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable LoanAmount seems to be relevant for the majority of mining tasks.', 'Variables ApplicantIncome and Loan_Amount_Term seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable CoapplicantIncome might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ApplicantIncome previously than variable CoapplicantIncome.'] -loan_data_boxplots.png;A set of boxplots of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['Variable CoapplicantIncome is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable LoanAmount shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable LoanAmount shows a high number of outlier values.', 'Variable LoanAmount doesn’t have any outliers.', 'Variable ApplicantIncome presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -loan_data_histograms_symbolic.png;A set of bar charts of the variables ['Dependents', 'Property_Area', 'Gender', 'Married', 'Education', 'Self_Employed', 'Credit_History'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Married and Dependents variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Education variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Education seems to be promising.', 'Feature generation based on the use of variable Credit_History wouldn’t be useful, but the use of Dependents seems to be promising.', 'Given the usual semantics of Education variable, dummification would have been a better codification.', 'It is better to drop the variable Married than removing all records with missing values.', 'Not knowing the semantics of Property_Area variable, dummification could have been a more 
adequate codification.'] -loan_data_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Gender', 'Dependents', 'Self_Employed', 'Loan_Amount_Term', 'Credit_History'].;['Discarding variable Dependents would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Credit_History seems to be promising.', 'It is better to drop the variable Gender than removing all records with missing values.'] +loan_data_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 20%.'] +loan_data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables CoapplicantIncome or ApplicantIncome can be discarded without losing information.', 'The variable CoapplicantIncome can be discarded without risking losing information.', 'Variables ApplicantIncome and LoanAmount seem to be useful for classification tasks.', 'Variables Loan_Amount_Term and CoapplicantIncome are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ApplicantIncome seems to be relevant for the majority of mining tasks.', 'Variables CoapplicantIncome and ApplicantIncome seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable LoanAmount might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable CoapplicantIncome previously than variable Loan_Amount_Term.'] +loan_data_boxplots.png;A set of boxplots of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['Variable Loan_Amount_Term is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Loan_Amount_Term shows some outliers, but we can’t be sure of the same for variable ApplicantIncome.', 'Outliers seem to be a problem in the dataset.', 'Variable ApplicantIncome shows a high number of outlier values.', 'Variable Loan_Amount_Term doesn’t have any outliers.', 'Variable ApplicantIncome presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a 
KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +loan_data_histograms_symbolic.png;A set of bar charts of the variables ['Dependents', 'Property_Area', 'Gender', 'Married', 'Education', 'Self_Employed', 'Credit_History'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Credit_History can be seen as ordinal.', 'The variable Married can be seen as ordinal without losing information.', 'Considering the common semantics for Credit_History and Dependents variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Property_Area variable, dummification would be the most adequate encoding.', 'The variable Dependents can be coded as ordinal without losing information.', 'Feature generation based on variable Dependents seems to be promising.', 'Feature generation based on the use of variable Self_Employed wouldn’t be useful, but the use of Dependents seems to be promising.', 'Given the usual semantics of Education variable, dummification would have been a better codification.', 'It is better to drop the variable Property_Area than removing all records with missing values.', 'Not knowing the semantics of Dependents variable, dummification could have been a more adequate codification.'] +loan_data_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Gender', 'Dependents', 'Self_Employed', 'Loan_Amount_Term', 'Credit_History'].;['Discarding variable Gender would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Dependents seems to be promising.', 'It is better to drop the variable Self_Employed than removing all records with missing values.'] loan_data_class_histogram.png;A bar chart showing the distribution of the target variable Loan_Status.;['Balancing this dataset would be mandatory to improve the results.'] -loan_data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -loan_data_histograms_numeric.png;A set of histograms of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['All variables, but the class, should be dealt with as numeric.', 'The variable CoapplicantIncome can be seen as ordinal.', 'The variable Loan_Amount_Term can be seen as ordinal without losing information.', 'Variable CoapplicantIncome is balanced.', 'It is clear that variable ApplicantIncome shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable Loan_Amount_Term shows some outlier values.', 'Variable ApplicantIncome doesn’t have any outliers.', 'Variable Loan_Amount_Term presents some 
outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Loan_Amount_Term and ApplicantIncome variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ApplicantIncome variable, dummification would be the most adequate encoding.', 'The variable CoapplicantIncome can be coded as ordinal without losing information.', 'Feature generation based on variable CoapplicantIncome seems to be promising.', 'Feature generation based on the use of variable ApplicantIncome wouldn’t be useful, but the use of CoapplicantIncome seems to be promising.', 'Given the usual semantics of Loan_Amount_Term variable, dummification would have been a better codification.', 'It is better to drop the variable CoapplicantIncome than removing all records with missing values.', 'Not knowing the semantics of Loan_Amount_Term variable, dummification could have been a more adequate codification.'] -Dry_Bean_Dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Area <= 39172.5 and the second with the condition AspectRation <= 1.86.;['It is clear that variable ShapeFactor1 is one of the five most relevant features.', 'The variable Extent seems to be one of the three most relevant features.', 'The variable EquivDiameter discriminates between the target values, as shown in the decision tree.', 'It is possible to state that ShapeFactor1 is the second most discriminative variable regarding the class.', 'Variable AspectRation is one of the most relevant variables.', 'Variable Perimeter seems to be relevant for the majority of mining tasks.', 'Variables Solidity and EquivDiameter seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The precision for the presented tree is lower than 90%.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 2501.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 4982.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], the Decision Tree presented classifies (A,B) as HOROZ.'] -Dry_Bean_Dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -Dry_Bean_Dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to 
identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +loan_data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +loan_data_histograms_numeric.png;A set of histograms of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['All variables, but the class, should be dealt with as date.', 'The variable Loan_Amount_Term can be seen as ordinal.', 'The variable CoapplicantIncome can be seen as ordinal without losing information.', 'Variable LoanAmount is balanced.', 'It is clear that variable LoanAmount shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable Loan_Amount_Term shows a high number of outlier values.', 'Variable Loan_Amount_Term doesn’t have any outliers.', 'Variable LoanAmount presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for LoanAmount and ApplicantIncome variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for LoanAmount variable, dummification would be the most adequate encoding.', 'The variable LoanAmount can be coded as ordinal without losing information.', 'Feature generation based on variable LoanAmount seems to be promising.', 'Feature generation based on the use of variable CoapplicantIncome wouldn’t be useful, but the use of ApplicantIncome seems to be promising.', 'Given the usual semantics of ApplicantIncome variable, dummification would have been a better codification.', 'It is better to drop the variable ApplicantIncome than removing all records with missing values.', 'Not knowing the semantics of Loan_Amount_Term variable, dummification could have been a more adequate codification.'] +Dry_Bean_Dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Area <= 39172.5 and the second with the condition AspectRation <= 1.86.;['The variable Area discriminates between the target values, as shown in the decision tree.', 'Variable AspectRation is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The precision for the presented tree is higher than its specificity.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 1284.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], the Decision Tree 
presented classifies (not A, B) as BOMBAY.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that KNN algorithm classifies (A,B) as DERMASON for any k ≤ 2501.'] +Dry_Bean_Dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +Dry_Bean_Dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] Dry_Bean_Dataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -Dry_Bean_Dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Dry_Bean_Dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -Dry_Bean_Dataset_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 5 and 25%.'] -Dry_Bean_Dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Extent or Area can be discarded without losing information.', 'The variable Solidity can be discarded without risking losing information.', 'Variables roundness and Perimeter are redundant, but we can’t say the same for the pair MinorAxisLength and Eccentricity.', 'Variables MinorAxisLength and Eccentricity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Extent seems to be relevant for the majority of mining tasks.', 'Variables ShapeFactor1 and Area seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable EquivDiameter might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Eccentricity previously than variable ShapeFactor1.'] -Dry_Bean_Dataset_boxplots.png;A set of boxplots of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['Variable ShapeFactor1 is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Area shows some outliers, but we can’t be sure of the same for variable Perimeter.', 'Outliers seem to be a problem in the dataset.', 'Variable AspectRation shows a high number of outlier values.', 'Variable Extent doesn’t have any outliers.', 'Variable Solidity presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Dry_Bean_Dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] +Dry_Bean_Dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] 
+Dry_Bean_Dataset_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] +Dry_Bean_Dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['The intrinsic dimensionality of this dataset is 9.', 'One of the variables MinorAxisLength or Eccentricity can be discarded without losing information.', 'The variable Eccentricity can be discarded without risking losing information.', 'Variables MinorAxisLength and Solidity are redundant, but we can’t say the same for the pair ShapeFactor1 and Extent.', 'Variables roundness and ShapeFactor1 are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ShapeFactor1 seems to be relevant for the majority of mining tasks.', 'Variables Perimeter and Eccentricity seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Solidity might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Eccentricity previously than variable EquivDiameter.'] +Dry_Bean_Dataset_boxplots.png;A set of boxplots of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['Variable MinorAxisLength is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Solidity shows some outliers, but we can’t be sure of the same for variable EquivDiameter.', 'Outliers seem to be a problem in the dataset.', 'Variable Solidity shows some outlier values.', 'Variable roundness doesn’t have any outliers.', 'Variable Eccentricity presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Dry_Bean_Dataset_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] -Dry_Bean_Dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Dry_Bean_Dataset_histograms_numeric.png;A set of histograms of the variables ['Area', 
'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['All variables, but the class, should be dealt with as date.', 'The variable Solidity can be seen as ordinal.', 'The variable Area can be seen as ordinal without losing information.', 'Variable Solidity is balanced.', 'It is clear that variable MinorAxisLength shows some outliers, but we can’t be sure of the same for variable Solidity.', 'Outliers seem to be a problem in the dataset.', 'Variable MinorAxisLength shows some outlier values.', 'Variable MinorAxisLength doesn’t have any outliers.', 'Variable Perimeter presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for MinorAxisLength and Area variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Area variable, dummification would be the most adequate encoding.', 'The variable ShapeFactor1 can be coded as ordinal without losing information.', 'Feature generation based on variable AspectRation seems to be promising.', 'Feature generation based on the use of variable Eccentricity wouldn’t be useful, but the use of Area seems to be promising.', 'Given the usual semantics of Solidity variable, dummification would have been a better codification.', 'It is better to drop the variable Area than removing all records with missing values.', 'Not knowing the semantics of Area variable, dummification could have been a more adequate codification.'] -credit_customers_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition existing_credits <= 1.5 and the second with the condition residence_since <= 3.5.;['It is clear that variable age is one of the five most relevant features.', 'The variable installment_commitment seems to be one of the five most relevant features.', 'The variable credit_amount discriminates between the target values, as shown in the decision tree.', 'It is possible to state that age is the second most discriminative variable regarding the class.', 'Variable duration is one of the most relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables credit_amount and age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 264.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 183.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that 
KNN algorithm classifies (not A, not B) as bad for any k ≤ 146.'] +Dry_Bean_Dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Dry_Bean_Dataset_histograms_numeric.png;A set of histograms of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['All variables, but the class, should be dealt with as date.', 'The variable Perimeter can be seen as ordinal.', 'The variable Extent can be seen as ordinal without losing information.', 'Variable Solidity is balanced.', 'It is clear that variable EquivDiameter shows some outliers, but we can’t be sure of the same for variable MinorAxisLength.', 'Outliers seem to be a problem in the dataset.', 'Variable Area shows some outlier values.', 'Variable roundness doesn’t have any outliers.', 'Variable Solidity presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for AspectRation and Area variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for EquivDiameter variable, dummification would be the most adequate encoding.', 'The variable roundness can be coded as ordinal without losing information.', 'Feature generation based on variable EquivDiameter seems to be promising.', 'Feature generation based on the use of variable MinorAxisLength wouldn’t be useful, but the use of Area seems to be promising.', 'Given the usual semantics of roundness variable, dummification would have been a better codification.', 'It is better to drop the variable Solidity than removing all records with missing values.', 'Not knowing the semantics of Perimeter variable, dummification could have been a more adequate codification.'] +credit_customers_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition existing_credits <= 1.5 and the second with the condition residence_since <= 3.5.;['The variable residence_since discriminates between the target values, as shown in the decision tree.', 'Variable residence_since is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The accuracy for the presented tree is higher than its recall.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 107.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state 
that Naive Bayes algorithm classifies (not A, B), as bad.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 264.'] credit_customers_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] credit_customers_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -credit_customers_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -credit_customers_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -credit_customers_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +credit_customers_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +credit_customers_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 
neighbors.'] +credit_customers_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] credit_customers_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -credit_customers_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 25%.'] -credit_customers_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables age or existing_credits can be discarded without losing information.', 'The variable existing_credits can be discarded without risking losing information.', 'Variables residence_since and installment_commitment are redundant, but we can’t say the same for the pair credit_amount and age.', 'Variables existing_credits and residence_since are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable installment_commitment seems to be relevant for the majority of mining tasks.', 'Variables installment_commitment and residence_since seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable existing_credits might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable installment_commitment previously than variable duration.'] -credit_customers_boxplots.png;A set of boxplots of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['Variable existing_credits is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable duration shows some outliers, but we can’t be sure of the same for variable credit_amount.', 'Outliers seem to be a problem in the dataset.', 'Variable installment_commitment shows some outlier values.', 'Variable residence_since doesn’t have any outliers.', 'Variable residence_since presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and 
Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -credit_customers_histograms_symbolic.png;A set of bar charts of the variables ['checking_status', 'employment', 'other_parties', 'other_payment_plans', 'housing', 'num_dependents', 'own_telephone', 'foreign_worker'].;['All variables, but the class, should be dealt with as numeric.', 'The variable other_payment_plans can be seen as ordinal.', 'The variable num_dependents can be seen as ordinal without losing information.', 'Considering the common semantics for housing and checking_status variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for num_dependents variable, dummification would be the most adequate encoding.', 'The variable foreign_worker can be coded as ordinal without losing information.', 'Feature generation based on variable foreign_worker seems to be promising.', 'Feature generation based on the use of variable employment wouldn’t be useful, but the use of checking_status seems to be promising.', 'Given the usual semantics of foreign_worker variable, dummification would have been a better codification.', 'It is better to drop the variable employment than removing all records with missing values.', 'Not knowing the semantics of checking_status variable, dummification could have been a more adequate codification.'] +credit_customers_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 5 and 20%.'] +credit_customers_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables age or credit_amount can be discarded without losing information.', 'The variable existing_credits can be discarded without risking losing information.', 'Variables existing_credits and credit_amount are redundant, but we can’t say the same for the pair duration and installment_commitment.', 'Variables residence_since and existing_credits are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables age and installment_commitment seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable credit_amount might improve the training of decision trees.', 'There is evidence in favour for sequential backward selection to select variable existing_credits previously than variable credit_amount.'] +credit_customers_boxplots.png;A set of boxplots of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['Variable existing_credits is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable residence_since.', 'Outliers seem to be a problem in the dataset.', 'Variable age shows some outlier values.', 'Variable residence_since doesn’t have any outliers.', 'Variable age presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +credit_customers_histograms_symbolic.png;A set of bar charts of the variables ['checking_status', 'employment', 'other_parties', 'other_payment_plans', 'housing', 'num_dependents', 'own_telephone', 'foreign_worker'].;['All variables, but the class, should be dealt with as numeric.', 'The variable other_parties can be seen as ordinal.', 'The variable employment can be seen as ordinal without losing information.', 'Considering the common semantics for checking_status and employment variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for housing variable, dummification would be the most adequate encoding.', 'The variable checking_status can be coded as ordinal without losing information.', 'Feature generation based on variable num_dependents seems to be promising.', 'Feature generation based on the use of variable employment wouldn’t be useful, but the use of checking_status seems to be promising.', 'Given the usual semantics of own_telephone variable, dummification would have been a better codification.', 'It is better to drop the 
variable num_dependents than removing all records with missing values.', 'Not knowing the semantics of num_dependents variable, dummification could have been a more adequate codification.'] credit_customers_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] -credit_customers_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -credit_customers_histograms_numeric.png;A set of histograms of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable duration can be seen as ordinal.', 'The variable duration can be seen as ordinal without losing information.', 'Variable residence_since is balanced.', 'It is clear that variable installment_commitment shows some outliers, but we can’t be sure of the same for variable residence_since.', 'Outliers seem to be a problem in the dataset.', 'Variable duration shows some outlier values.', 'Variable age doesn’t have any outliers.', 'Variable residence_since presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for installment_commitment and duration variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for credit_amount variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable credit_amount seems to be promising.', 'Feature generation based on the use of variable duration wouldn’t be useful, but the use of credit_amount seems to be promising.', 'Given the usual semantics of credit_amount variable, dummification would have been a better codification.', 'It is better to drop the variable installment_commitment than removing all records with missing values.', 'Not knowing the semantics of existing_credits variable, dummification could have been a more adequate codification.'] -weatherAUS_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rainfall <= 0.1 and the second with the condition Pressure3pm <= 1009.65.;['It is clear that variable Cloud3pm is one of the three most relevant features.', 'The variable Temp3pm seems to be one of the two most relevant features.', 'The variable WindSpeed9am discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Cloud9am is the second most discriminative variable regarding the class.', 'Variable Pressure3pm is one of the most relevant variables.', 'Variable Cloud3pm seems to be relevant for the majority of mining tasks.', 'Variables Cloud9am and WindSpeed9am seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported 
in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The precision for the presented tree is lower than its recall.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that KNN algorithm classifies (A, not B) as No for any k ≤ 1686.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that KNN algorithm classifies (A, not B) as Yes for any k ≤ 1154.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, B) as Yes.'] -weatherAUS_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -weatherAUS_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -weatherAUS_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -weatherAUS_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -weatherAUS_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] +credit_customers_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably 
be preferable over undersampling.'] +credit_customers_histograms_numeric.png;A set of histograms of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['All variables, but the class, should be dealt with as binary.', 'The variable credit_amount can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 'Variable duration is balanced.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable credit_amount.', 'Outliers seem to be a problem in the dataset.', 'Variable residence_since shows some outlier values.', 'Variable credit_amount doesn’t have any outliers.', 'Variable existing_credits presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for residence_since and duration variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for installment_commitment variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable residence_since seems to be promising.', 'Feature generation based on the use of variable credit_amount wouldn’t be useful, but the use of duration seems to be promising.', 'Given the usual semantics of age variable, dummification would have been a better codification.', 'It is better to drop the variable residence_since than removing all records with missing values.', 'Not knowing the semantics of installment_commitment variable, dummification could have been a more adequate codification.'] +weatherAUS_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rainfall <= 0.1 and the second with the condition Pressure3pm <= 1009.65.;['The variable Pressure3pm discriminates between the target values, as shown in the decision tree.', 'Variable Rainfall is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'The accuracy for the presented tree is higher than 75%.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, not B) as No.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, B) as Yes.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as No.'] +weatherAUS_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] 
+weatherAUS_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +weatherAUS_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +weatherAUS_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +weatherAUS_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] weatherAUS_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -weatherAUS_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 15 and 25%.'] -weatherAUS_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables Cloud3pm or Pressure9am can be discarded without losing information.', 'The variable Pressure3pm can be discarded without risking losing information.', 'Variables Cloud9am and Temp3pm are redundant, but we can’t say the same for the pair Rainfall and Pressure3pm.', 'Variables Rainfall and Cloud3pm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable WindSpeed9am seems to be relevant for the majority of mining tasks.', 'Variables Cloud3pm and Rainfall seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Rainfall might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Temp3pm previously than variable Rainfall.'] -weatherAUS_boxplots.png;A set of boxplots of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['Variable Pressure9am is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cloud9am shows some outliers, but we can’t be sure of the same for variable Cloud3pm.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure3pm shows a high number of outlier values.', 'Variable Temp3pm doesn’t have any outliers.', 'Variable Pressure3pm presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -weatherAUS_histograms_symbolic.png;A set of bar charts of the variables ['Location', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'RainToday'].;['All variables, but the class, should be dealt with as binary.', 'The variable RainToday can be seen as ordinal.', 'The variable WindDir3pm can be seen as ordinal without losing information.', 'Considering the common semantics for WindDir3pm and Location variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindDir9am variable, dummification would be the most adequate encoding.', 'The variable RainToday can be coded as ordinal without losing information.', 'Feature generation based on variable Location seems to be promising.', 'Feature generation based on the use of variable WindGustDir wouldn’t be useful, but the use of Location seems to be promising.', 'Given the usual semantics of WindDir9am variable, dummification would have been a better codification.', 'It is better to drop the variable WindDir9am than removing all records with missing values.', 'Not knowing the semantics of WindDir9am variable, dummification could have been a more adequate 
codification.'] -weatherAUS_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Rainfall', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm', 'RainToday'].;['Discarding variable RainToday would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 40% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable RainToday seems to be promising.', 'It is better to drop the variable Pressure9am than removing all records with missing values.'] +weatherAUS_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 20%.'] +weatherAUS_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Pressure9am or Pressure3pm can be discarded without losing information.', 'The variable Pressure9am can be discarded without risking losing information.', 'Variables Rainfall and Pressure3pm are redundant, but we can’t say the same for the pair Pressure9am and Cloud3pm.', 'Variables Temp3pm and Rainfall are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Temp3pm seems to be relevant for the majority of mining tasks.', 'Variables Pressure9am and Cloud3pm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Cloud9am might improve the training of decision trees.', 'There is evidence in favour for sequential backward selection to select variable Cloud9am previously than variable Pressure9am.'] +weatherAUS_boxplots.png;A set of boxplots of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['Variable Pressure9am is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cloud9am shows some outliers, but we can’t be sure of the same for variable WindSpeed9am.', 'Outliers seem to be a problem in the dataset.', 'Variable Rainfall shows a high number of outlier values.', 'Variable Cloud9am doesn’t have any outliers.', 'Variable Cloud9am presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this 
dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +weatherAUS_histograms_symbolic.png;A set of bar charts of the variables ['Location', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'RainToday'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable WindDir9am can be seen as ordinal.', 'The variable WindDir3pm can be seen as ordinal without losing information.', 'Considering the common semantics for Location and WindGustDir variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindGustDir variable, dummification would be the most adequate encoding.', 'The variable WindDir3pm can be coded as ordinal without losing information.', 'Feature generation based on variable WindDir3pm seems to be promising.', 'Feature generation based on the use of variable WindDir3pm wouldn’t be useful, but the use of Location seems to be promising.', 'Given the usual semantics of RainToday variable, dummification would have been a better codification.', 'It is better to drop the variable Location than removing all records with missing values.', 'Not knowing the semantics of WindGustDir variable, dummification could have been a more adequate codification.'] +weatherAUS_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Rainfall', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm', 'RainToday'].;['Discarding variable Pressure9am would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable RainToday seems to be promising.', 'It is better to drop the variable Cloud9am than removing all records with missing values.'] weatherAUS_class_histogram.png;A bar chart showing the distribution of the target variable RainTomorrow.;['Balancing this dataset would be mandatory to improve the results.'] weatherAUS_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -weatherAUS_histograms_numeric.png;A set of histograms of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['All variables, but the class, should be dealt with as binary.', 'The variable Pressure3pm can be seen as ordinal.', 'The variable Pressure3pm can be seen as ordinal without losing information.', 'Variable WindSpeed9am is balanced.', 'It is clear that variable Rainfall shows some outliers, but we can’t be sure of the same for variable Pressure3pm.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure9am shows a high number of outlier values.', 'Variable Rainfall doesn’t have any 
outliers.', 'Variable Cloud9am presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Rainfall and WindSpeed9am variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindSpeed9am variable, dummification would be the most adequate encoding.', 'The variable Pressure3pm can be coded as ordinal without losing information.', 'Feature generation based on variable Rainfall seems to be promising.', 'Feature generation based on the use of variable Pressure3pm wouldn’t be useful, but the use of Rainfall seems to be promising.', 'Given the usual semantics of Temp3pm variable, dummification would have been a better codification.', 'It is better to drop the variable Pressure9am than removing all records with missing values.', 'Not knowing the semantics of Pressure3pm variable, dummification could have been a more adequate codification.'] -car_insurance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition displacement <= 1196.5 and the second with the condition height <= 1519.0.;['It is clear that variable length is one of the three most relevant features.', 'The variable age_of_car seems to be one of the three most relevant features.', 'The variable displacement discriminates between the target values, as shown in the decision tree.', 'It is possible to state that width is the first most discriminative variable regarding the class.', 'Variable gross_weight is one of the most relevant variables.', 'Variable airbags seems to be relevant for the majority of mining tasks.', 'Variables length and age_of_car seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 90%.', 'The number of False Negatives reported in the same tree is 10.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The specificity for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 3813.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (not A, not B) as 1 for any k ≤ 3813.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as 1.'] -car_insurance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -car_insurance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting 
for gradient boosting models with more than 1502 estimators.'] -car_insurance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -car_insurance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -car_insurance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +weatherAUS_histograms_numeric.png;A set of histograms of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['All variables, but the class, should be dealt with as date.', 'The variable Rainfall can be seen as ordinal.', 'The variable Pressure3pm can be seen as ordinal without losing information.', 'Variable Cloud3pm is balanced.', 'It is clear that variable Pressure9am shows some outliers, but we can’t be sure of the same for variable Rainfall.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure9am shows some outlier values.', 'Variable Cloud3pm doesn’t have any outliers.', 'Variable Pressure9am presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Pressure3pm and Rainfall variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Rainfall variable, dummification would be the most adequate encoding.', 'The variable Cloud9am can be coded as ordinal without losing information.', 'Feature generation based on variable Temp3pm seems to be promising.', 'Feature generation based on the use of variable Cloud9am wouldn’t be useful, but the use of Rainfall seems to be promising.', 'Given the usual semantics of WindSpeed9am variable, dummification would have been a better codification.', 'It is better to drop the variable Rainfall than removing all records with missing values.', 'Not knowing the semantics of Rainfall variable, dummification could have been a more adequate codification.'] +car_insurance_decision_tree.png;An image showing a decision 
tree with depth = 2 where the first decision is made with the condition displacement <= 1196.5 and the second with the condition height <= 1519.0.;['The variable displacement discriminates between the target values, as shown in the decision tree.', 'Variable displacement is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 60%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 2141.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 686.'] +car_insurance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +car_insurance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +car_insurance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +car_insurance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +car_insurance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able 
to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] car_insurance_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -car_insurance_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 7 principal components would imply an error between 5 and 25%.'] -car_insurance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables age_of_car or width can be discarded without losing information.', 'The variable age_of_policyholder can be discarded without risking losing information.', 'Variables gross_weight and length are redundant, but we can’t say the same for the pair policy_tenure and displacement.', 'Variables policy_tenure and age_of_car are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age_of_car seems to be relevant for the majority of mining tasks.', 'Variables policy_tenure and age_of_car seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable width might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable age_of_policyholder previously than variable height.'] -car_insurance_boxplots.png;A set of boxplots of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['Variable width is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age_of_policyholder shows some outliers, but we can’t be sure of the same for variable height.', 'Outliers seem to be a problem in the dataset.', 'Variable policy_tenure shows some outlier values.', 'Variable displacement doesn’t have any outliers.', 'Variable policy_tenure presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -car_insurance_histograms_symbolic.png;A set of bar charts of the variables ['area_cluster', 'segment', 'model', 'fuel_type', 'max_torque', 'max_power', 'steering_type', 'is_esc', 'is_adjustable_steering'].;['All variables, but the class, should be dealt with as 
symbolic.', 'The variable is_esc can be seen as ordinal.', 'The variable fuel_type can be seen as ordinal without losing information.', 'Considering the common semantics for max_power and area_cluster variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for max_torque variable, dummification would be the most adequate encoding.', 'The variable model can be coded as ordinal without losing information.', 'Feature generation based on variable steering_type seems to be promising.', 'Feature generation based on the use of variable fuel_type wouldn’t be useful, but the use of area_cluster seems to be promising.', 'Given the usual semantics of model variable, dummification would have been a better codification.', 'It is better to drop the variable steering_type than removing all records with missing values.', 'Not knowing the semantics of max_power variable, dummification could have been a more adequate codification.'] +car_insurance_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 25%.'] +car_insurance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables age_of_car or airbags can be discarded without losing information.', 'The variable length can be discarded without risking losing information.', 'Variables age_of_car and policy_tenure are redundant, but we can’t say the same for the pair height and length.', 'Variables age_of_car and gross_weight are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable height seems to be relevant for the majority of mining tasks.', 'Variables gross_weight and width seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable length might improve the training of decision trees.', 'There is evidence in favour for sequential backward selection to select variable length previously than variable gross_weight.'] +car_insurance_boxplots.png;A set of boxplots of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['Variable height is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable displacement shows some outliers, but we can’t be sure of the same for variable policy_tenure.', 'Outliers seem to be a problem in the dataset.', 'Variable airbags shows some outlier values.', 'Variable width doesn’t have any outliers.', 'Variable length presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 
by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +car_insurance_histograms_symbolic.png;A set of bar charts of the variables ['area_cluster', 'segment', 'model', 'fuel_type', 'max_torque', 'max_power', 'steering_type', 'is_esc', 'is_adjustable_steering'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable segment can be seen as ordinal.', 'The variable is_esc can be seen as ordinal without losing information.', 'Considering the common semantics for segment and area_cluster variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for max_torque variable, dummification would be the most adequate encoding.', 'The variable max_torque can be coded as ordinal without losing information.', 'Feature generation based on variable area_cluster seems to be promising.', 'Feature generation based on the use of variable steering_type wouldn’t be useful, but the use of area_cluster seems to be promising.', 'Given the usual semantics of model variable, dummification would have been a better codification.', 'It is better to drop the variable steering_type than removing all records with missing values.', 'Not knowing the semantics of is_esc variable, dummification could have been a more adequate codification.'] car_insurance_class_histogram.png;A bar chart showing the distribution of the target variable is_claim.;['Balancing this dataset would be mandatory to improve the results.'] car_insurance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -car_insurance_histograms_numeric.png;A set of histograms of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable age_of_policyholder can be seen as ordinal.', 'The variable width can be seen as ordinal without losing information.', 'Variable age_of_policyholder is balanced.', 'It is clear that variable policy_tenure shows some outliers, but we can’t be sure of the same for variable height.', 'Outliers seem to be a problem in the dataset.', 'Variable age_of_car shows some outlier values.', 'Variable airbags doesn’t have any outliers.', 'Variable gross_weight presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for width and policy_tenure variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for gross_weight variable, dummification would be the most adequate encoding.', 'The variable length can be coded as ordinal without losing information.', 'Feature generation based on variable width seems to be promising.', 'Feature generation based on the use of variable 
gross_weight wouldn’t be useful, but the use of policy_tenure seems to be promising.', 'Given the usual semantics of width variable, dummification would have been a better codification.', 'It is better to drop the variable height than removing all records with missing values.', 'Not knowing the semantics of policy_tenure variable, dummification could have been a more adequate codification.'] -heart_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition slope <= 1.5 and the second with the condition restecg <= 0.5.;['It is clear that variable thal is one of the four most relevant features.', 'The variable thal seems to be one of the four most relevant features.', 'The variable trestbps discriminates between the target values, as shown in the decision tree.', 'It is possible to state that thal is the second most discriminative variable regarding the class.', 'Variable oldpeak is one of the most relevant variables.', 'Variable restecg seems to be relevant for the majority of mining tasks.', 'Variables slope and chol seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The recall for the presented tree is higher than its specificity.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 0.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 202.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], the Decision Tree presented classifies (not A, B) as 1.'] -heart_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +car_insurance_histograms_numeric.png;A set of histograms of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['All variables, but the class, should be dealt with as numeric.', 'The variable age_of_car can be seen as ordinal.', 'The variable height can be seen as ordinal without losing information.', 'Variable displacement is balanced.', 'It is clear that variable displacement shows some outliers, but we can’t be sure of the same for variable age_of_car.', 'Outliers seem to be a problem in the dataset.', 'Variable displacement shows some outlier values.', 'Variable width doesn’t have any outliers.', 'Variable height presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for displacement and policy_tenure variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common 
semantics for length variable, dummification would be the most adequate encoding.', 'The variable age_of_car can be coded as ordinal without losing information.', 'Feature generation based on variable height seems to be promising.', 'Feature generation based on the use of variable age_of_car wouldn’t be useful, but the use of policy_tenure seems to be promising.', 'Given the usual semantics of age_of_policyholder variable, dummification would have been a better codification.', 'It is better to drop the variable gross_weight than removing all records with missing values.', 'Not knowing the semantics of displacement variable, dummification could have been a more adequate codification.'] +heart_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition slope <= 1.5 and the second with the condition restecg <= 0.5.;['The variable slope discriminates between the target values, as shown in the decision tree.', 'Variable slope is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 75%.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The precision for the presented tree is lower than its specificity.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as 0.'] +heart_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] heart_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -heart_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -heart_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the 
existence of overfitting for KNN models with less than 5 neighbors.'] -heart_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +heart_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +heart_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +heart_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] heart_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -heart_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 10 and 30%.'] -heart_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables restecg or thalach can be discarded without losing information.', 'The variable trestbps can be discarded without risking losing information.', 'Variables thalach and slope are redundant.', 'Variables restecg and thal are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable thalach seems to be relevant for the majority of mining tasks.', 'Variables slope and age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable trestbps might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable cp previously than variable ca.'] -heart_boxplots.png;A set of boxplots of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['Variable ca is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ca shows some outliers, but we can’t be sure of the same for variable restecg.', 'Outliers seem to be a problem in the dataset.', 'Variable restecg shows a high number of outlier values.', 'Variable thal doesn’t have any outliers.', 'Variable ca presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -heart_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'fbs', 'exang'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable fbs can be seen as ordinal.', 'The variable exang can be seen as ordinal without losing information.', 'Considering the common semantics for fbs and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for exang variable, dummification would be the most adequate encoding.', 'The variable fbs can be coded as ordinal without losing information.', 'Feature generation based on variable exang seems to be promising.', 'Feature generation based on the use of variable sex wouldn’t be useful, but the use of fbs seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable sex than removing all records with missing values.', 'Not knowing the semantics of sex variable, dummification could have been a more adequate codification.'] +heart_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 9 principal components would imply an error between 15 and 20%.'] 
+heart_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables restecg or age can be discarded without losing information.', 'The variable trestbps can be discarded without risking losing information.', 'Variables cp and age are redundant, but we can’t say the same for the pair ca and trestbps.', 'Variables restecg and oldpeak are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable thalach seems to be relevant for the majority of mining tasks.', 'Variables cp and chol seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable age might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable restecg previously than variable slope.'] +heart_boxplots.png;A set of boxplots of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['Variable thal is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable trestbps shows some outliers, but we can’t be sure of the same for variable restecg.', 'Outliers seem to be a problem in the dataset.', 'Variable chol shows some outlier values.', 'Variable restecg doesn’t have any outliers.', 'Variable restecg presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +heart_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'fbs', 'exang'].;['All variables, but the class, should be dealt with as numeric.', 'The variable sex can be seen as ordinal.', 'The variable sex can be seen as ordinal without losing information.', 'Considering the common semantics for fbs and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for sex variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable exang seems to be promising.', 'Feature generation based on the use of variable exang wouldn’t be useful, but the use of sex seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable exang than removing all records with missing values.', 'Not knowing the semantics of sex variable, dummification could have been a more adequate codification.'] heart_class_histogram.png;A bar chart showing the distribution of the target variable 
target.;['Balancing this dataset would be mandatory to improve the results.'] -heart_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -heart_histograms_numeric.png;A set of histograms of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['All variables, but the class, should be dealt with as date.', 'The variable cp can be seen as ordinal.', 'The variable thalach can be seen as ordinal without losing information.', 'Variable thalach is balanced.', 'It is clear that variable oldpeak shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable oldpeak shows some outlier values.', 'Variable chol doesn’t have any outliers.', 'Variable thalach presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and cp variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for age variable, dummification would be the most adequate encoding.', 'The variable trestbps can be coded as ordinal without losing information.', 'Feature generation based on variable restecg seems to be promising.', 'Feature generation based on the use of variable trestbps wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of chol variable, dummification would have been a better codification.', 'It is better to drop the variable oldpeak than removing all records with missing values.', 'Not knowing the semantics of thal variable, dummification could have been a more adequate codification.'] -Breast_Cancer_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition perimeter_mean <= 90.47 and the second with the condition texture_worst <= 27.89.;['It is clear that variable smoothness_se is one of the five most relevant features.', 'The variable radius_worst seems to be one of the two most relevant features.', 'The variable radius_worst discriminates between the target values, as shown in the decision tree.', 'It is possible to state that texture_worst is the second most discriminative variable regarding the class.', 'Variable perimeter_worst is one of the most relevant variables.', 'Variable texture_worst seems to be relevant for the majority of mining tasks.', 'Variables area_se and perimeter_worst seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is lower than the number of 
True Negatives for the presented tree.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that KNN algorithm classifies (A,B) as B for any k ≤ 20.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that KNN algorithm classifies (A,B) as B for any k ≤ 20.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], the Decision Tree presented classifies (not A, B) as M.'] -Breast_Cancer_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Breast_Cancer_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -Breast_Cancer_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Breast_Cancer_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -Breast_Cancer_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] +heart_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +heart_histograms_numeric.png;A set of histograms of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['All variables, but the class, should be dealt with as binary.', 'The variable chol can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 
'Variable restecg is balanced.', 'It is clear that variable chol shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable age shows some outlier values.', 'Variable chol doesn’t have any outliers.', 'Variable ca presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for chol and age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for restecg variable, dummification would be the most adequate encoding.', 'The variable thal can be coded as ordinal without losing information.', 'Feature generation based on variable cp seems to be promising.', 'Feature generation based on the use of variable thalach wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of restecg variable, dummification would have been a better codification.', 'It is better to drop the variable trestbps than removing all records with missing values.', 'Not knowing the semantics of trestbps variable, dummification could have been a more adequate codification.'] +Breast_Cancer_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition perimeter_mean <= 90.47 and the second with the condition texture_worst <= 27.89.;['The variable texture_worst discriminates between the target values, as shown in the decision tree.', 'Variable texture_worst is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that Naive Bayes algorithm classifies (A, not B), as M.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that Naive Bayes algorithm classifies (not A, B), as M.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that KNN algorithm classifies (not A, B) as M for any k ≤ 20.'] +Breast_Cancer_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +Breast_Cancer_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +Breast_Cancer_overfitting_rf.png;A multi-line chart showing 
the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +Breast_Cancer_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +Breast_Cancer_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] Breast_Cancer_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Breast_Cancer_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 9 principal components would imply an error between 10 and 20%.'] -Breast_Cancer_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables perimeter_mean or symmetry_se can be discarded without losing information.', 'The variable perimeter_worst can be discarded without risking losing information.', 'Variables radius_worst and symmetry_se are redundant, but we can’t say the same for the pair perimeter_worst and perimeter_se.', 'Variables texture_mean and texture_worst are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable perimeter_mean seems to be relevant for the majority of mining tasks.', 'Variables perimeter_worst and perimeter_se seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable texture_se might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable radius_worst previously than variable area_se.'] -Breast_Cancer_boxplots.png;A set of boxplots of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['Variable radius_worst is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable radius_worst shows some outliers, but we can’t be sure of the same for variable perimeter_mean.', 'Outliers seem to be a problem in the dataset.', 'Variable texture_mean shows a high number of outlier values.', 'Variable symmetry_se doesn’t have any outliers.', 'Variable perimeter_se presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Breast_Cancer_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 10 and 30%.'] +Breast_Cancer_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables symmetry_se or area_se can be discarded without losing information.', 'The variable perimeter_worst can be discarded without risking losing information.', 'Variables texture_worst and radius_worst are redundant, but we can’t say the same for the pair perimeter_worst and texture_se.', 'Variables texture_worst and perimeter_se are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable area_se seems to be relevant for the majority of mining tasks.', 'Variables symmetry_se and perimeter_mean seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable texture_se might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable texture_mean previously than variable perimeter_se.'] +Breast_Cancer_boxplots.png;A set of boxplots of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['Variable perimeter_se is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable texture_mean shows some outliers, but we can’t be sure of the same for variable perimeter_se.', 'Outliers seem to be a problem in the dataset.', 'Variable texture_se shows a high number of outlier values.', 'Variable texture_mean doesn’t have any outliers.', 'Variable radius_worst presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Breast_Cancer_class_histogram.png;A bar chart showing the distribution of the target variable diagnosis.;['Balancing this dataset would be mandatory to improve the results.'] -Breast_Cancer_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Breast_Cancer_histograms_numeric.png;A set of histograms of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['All variables, but the class, should be dealt with as date.', 'The variable perimeter_worst can be seen as ordinal.', 'The variable texture_se can be seen as ordinal without losing 
information.', 'Variable perimeter_se is balanced.', 'It is clear that variable texture_worst shows some outliers, but we can’t be sure of the same for variable symmetry_se.', 'Outliers seem to be a problem in the dataset.', 'Variable perimeter_se shows a high number of outlier values.', 'Variable perimeter_worst doesn’t have any outliers.', 'Variable texture_worst presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for radius_worst and texture_mean variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for area_se variable, dummification would be the most adequate encoding.', 'The variable perimeter_se can be coded as ordinal without losing information.', 'Feature generation based on variable radius_worst seems to be promising.', 'Feature generation based on the use of variable texture_worst wouldn’t be useful, but the use of texture_mean seems to be promising.', 'Given the usual semantics of perimeter_worst variable, dummification would have been a better codification.', 'It is better to drop the variable perimeter_se than removing all records with missing values.', 'Not knowing the semantics of perimeter_mean variable, dummification could have been a more adequate codification.'] -e-commerce_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Prior_purchases <= 3.5 and the second with the condition Customer_care_calls <= 4.5.;['It is clear that variable Customer_care_calls is one of the two most relevant features.', 'The variable Customer_rating seems to be one of the three most relevant features.', 'The variable Prior_purchases discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Discount_offered is the first most discriminative variable regarding the class.', 'Variable Discount_offered is one of the most relevant variables.', 'Variable Discount_offered seems to be relevant for the majority of mining tasks.', 'Variables Cost_of_the_Product and Customer_rating seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as No.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 906.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as No.'] -e-commerce_overfitting_mlp.png;A multi-line chart showing the 
overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +Breast_Cancer_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Breast_Cancer_histograms_numeric.png;A set of histograms of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['All variables, but the class, should be dealt with as numeric.', 'The variable perimeter_mean can be seen as ordinal.', 'The variable radius_worst can be seen as ordinal without losing information.', 'Variable texture_se is balanced.', 'It is clear that variable radius_worst shows some outliers, but we can’t be sure of the same for variable perimeter_worst.', 'Outliers seem to be a problem in the dataset.', 'Variable smoothness_se shows some outlier values.', 'Variable smoothness_se doesn’t have any outliers.', 'Variable texture_worst presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for perimeter_mean and texture_mean variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for area_se variable, dummification would be the most adequate encoding.', 'The variable smoothness_se can be coded as ordinal without losing information.', 'Feature generation based on variable perimeter_worst seems to be promising.', 'Feature generation based on the use of variable area_se wouldn’t be useful, but the use of texture_mean seems to be promising.', 'Given the usual semantics of perimeter_worst variable, dummification would have been a better codification.', 'It is better to drop the variable texture_mean than removing all records with missing values.', 'Not knowing the semantics of smoothness_se variable, dummification could have been a more adequate codification.'] +e-commerce_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Prior_purchases <= 3.5 and the second with the condition Customer_care_calls <= 4.5.;['The variable Prior_purchases discriminates between the target values, as shown in the decision tree.', 'Variable Prior_purchases is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is higher than 60%.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and 
B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (A,B) as Yes for any k ≤ 1596.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 3657.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 1596.'] +e-commerce_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] e-commerce_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -e-commerce_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -e-commerce_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] -e-commerce_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +e-commerce_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +e-commerce_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k 
less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] +e-commerce_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] e-commerce_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -e-commerce_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 15 and 25%.'] -e-commerce_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Prior_purchases or Cost_of_the_Product can be discarded without losing information.', 'The variable Weight_in_gms can be discarded without risking losing information.', 'Variables Customer_care_calls and Prior_purchases are redundant, but we can’t say the same for the pair Cost_of_the_Product and Customer_rating.', 'Variables Customer_rating and Cost_of_the_Product are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Customer_rating seems to be relevant for the majority of mining tasks.', 'Variables Cost_of_the_Product and Prior_purchases seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Customer_care_calls might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Customer_care_calls previously than variable Weight_in_gms.'] -e-commerce_boxplots.png;A set of boxplots of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['Variable Customer_rating is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cost_of_the_Product shows some outliers, but we can’t be sure of the same for variable Customer_rating.', 'Outliers seem to be a problem in the dataset.', 'Variable Weight_in_gms shows some outlier values.', 'Variable Customer_care_calls doesn’t have any outliers.', 'Variable Weight_in_gms presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers 
for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -e-commerce_histograms_symbolic.png;A set of bar charts of the variables ['Warehouse_block', 'Mode_of_Shipment', 'Product_importance', 'Gender'].;['All variables, but the class, should be dealt with as binary.', 'The variable Gender can be seen as ordinal.', 'The variable Warehouse_block can be seen as ordinal without losing information.', 'Considering the common semantics for Mode_of_Shipment and Warehouse_block variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Product_importance can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Mode_of_Shipment wouldn’t be useful, but the use of Warehouse_block seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Product_importance than removing all records with missing values.', 'Not knowing the semantics of Mode_of_Shipment variable, dummification could have been a more adequate codification.'] +e-commerce_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 25%.'] +e-commerce_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Discount_offered or Prior_purchases can be discarded without losing information.', 'The variable Customer_rating can be discarded without risking losing information.', 'Variables Customer_care_calls and Cost_of_the_Product are redundant.', 'Variables Prior_purchases and Cost_of_the_Product are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Discount_offered seems to be relevant for the majority of mining tasks.', 'Variables Weight_in_gms and Prior_purchases seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Cost_of_the_Product might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Discount_offered previously than variable Cost_of_the_Product.'] +e-commerce_boxplots.png;A set of boxplots of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['Variable Discount_offered is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Customer_rating shows some outliers, but we can’t be sure of the same for variable Prior_purchases.', 'Outliers seem to be a problem in the dataset.', 'Variable Discount_offered shows some outlier values.', 'Variable Customer_rating doesn’t have any outliers.', 'Variable Prior_purchases presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +e-commerce_histograms_symbolic.png;A set of bar charts of the variables ['Warehouse_block', 'Mode_of_Shipment', 'Product_importance', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Warehouse_block can be seen as ordinal.', 'The variable Product_importance can be seen as ordinal without losing information.', 'Considering the common semantics for Mode_of_Shipment and Warehouse_block variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Mode_of_Shipment variable, dummification would be the most adequate encoding.', 'The variable Product_importance can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Warehouse_block wouldn’t be useful, but the use of Mode_of_Shipment seems to be promising.', 'Given the usual semantics of Warehouse_block variable, dummification would have been a better codification.', 'It is better to 
drop the variable Product_importance than removing all records with missing values.', 'Not knowing the semantics of Product_importance variable, dummification could have been a more adequate codification.'] e-commerce_class_histogram.png;A bar chart showing the distribution of the target variable ReachedOnTime.;['Balancing this dataset would be mandatory to improve the results.'] -e-commerce_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -e-commerce_histograms_numeric.png;A set of histograms of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Weight_in_gms can be seen as ordinal.', 'The variable Prior_purchases can be seen as ordinal without losing information.', 'Variable Prior_purchases is balanced.', 'It is clear that variable Prior_purchases shows some outliers, but we can’t be sure of the same for variable Customer_rating.', 'Outliers seem to be a problem in the dataset.', 'Variable Discount_offered shows some outlier values.', 'Variable Weight_in_gms doesn’t have any outliers.', 'Variable Customer_rating presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Customer_care_calls and Customer_rating variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Discount_offered variable, dummification would be the most adequate encoding.', 'The variable Prior_purchases can be coded as ordinal without losing information.', 'Feature generation based on variable Weight_in_gms seems to be promising.', 'Feature generation based on the use of variable Discount_offered wouldn’t be useful, but the use of Customer_care_calls seems to be promising.', 'Given the usual semantics of Discount_offered variable, dummification would have been a better codification.', 'It is better to drop the variable Customer_rating than removing all records with missing values.', 'Not knowing the semantics of Discount_offered variable, dummification could have been a more adequate codification.'] -maintenance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rotational speed [rpm] <= 1381.5 and the second with the condition Torque [Nm] <= 65.05.;['It is clear that variable Torque [Nm] is one of the two most relevant features.', 'The variable Air temperature [K] seems to be one of the five most relevant features.', 'The variable Tool wear [min] discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Process temperature [K] is the first most discriminative variable regarding the class.', 'Variable Tool wear [min] is one of the most relevant variables.', 'Variable Process temperature [K] seems to be relevant for the majority of mining tasks.', 'Variables Tool wear [min] and Rotational speed [rpm] seem 
to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of False Positives reported in the same tree is 50.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], the Decision Tree presented classifies (A, not B) as 0.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 943.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 5990.'] -maintenance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +e-commerce_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +e-commerce_histograms_numeric.png;A set of histograms of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['All variables, but the class, should be dealt with as date.', 'The variable Weight_in_gms can be seen as ordinal.', 'The variable Weight_in_gms can be seen as ordinal without losing information.', 'Variable Customer_care_calls is balanced.', 'It is clear that variable Discount_offered shows some outliers, but we can’t be sure of the same for variable Customer_care_calls.', 'Outliers seem to be a problem in the dataset.', 'Variable Prior_purchases shows a high number of outlier values.', 'Variable Prior_purchases doesn’t have any outliers.', 'Variable Discount_offered presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Prior_purchases and Customer_care_calls variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Customer_care_calls variable, dummification would be the most adequate encoding.', 'The variable Customer_care_calls can be coded as ordinal without losing information.', 'Feature generation based on variable Discount_offered seems to be promising.', 'Feature generation based on the use of variable Discount_offered wouldn’t be useful, but the use of Customer_care_calls seems to be promising.', 'Given the usual semantics of Discount_offered variable, dummification would have been a better 
codification.', 'It is better to drop the variable Discount_offered than removing all records with missing values.', 'Not knowing the semantics of Cost_of_the_Product variable, dummification could have been a more adequate codification.'] +maintenance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rotational speed [rpm] <= 1381.5 and the second with the condition Torque [Nm] <= 65.05.;['The variable Rotational speed [rpm] discriminates between the target values, as shown in the decision tree.', 'Variable Rotational speed [rpm] is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives reported in the same tree is 50.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 5990.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A, not B) as 1 for any k ≤ 46.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 46.'] +maintenance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] maintenance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -maintenance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -maintenance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -maintenance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis 
represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] +maintenance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +maintenance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +maintenance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] maintenance_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -maintenance_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 30%.'] -maintenance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Rotational speed [rpm] or Torque [Nm] can be discarded without losing information.', 'The variable Process temperature [K] can be discarded without risking losing information.', 'Variables Air temperature [K] and Tool wear [min] are redundant.', 'Variables Rotational speed [rpm] and Torque [Nm] are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Rotational speed [rpm] seems to be relevant for the majority of mining tasks.', 'Variables Rotational speed [rpm] and Process temperature [K] seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Process temperature [K] might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Rotational speed [rpm] previously than variable Tool wear [min].'] -maintenance_boxplots.png;A set of boxplots of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['Variable Torque [Nm] is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Process temperature [K].', 'Outliers seem to be a problem in the dataset.', 'Variable Process temperature [K] shows some outlier values.', 'Variable Torque [Nm] doesn’t have any outliers.', 'Variable Process temperature [K] presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -maintenance_histograms_symbolic.png;A set of bar charts of the variables ['Type', 'TWF', 'HDF', 'PWF', 'OSF', 'RNF'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable PWF can be seen as ordinal.', 'The variable Type can be seen as ordinal without losing information.', 'Considering the common semantics for Type and TWF variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Type variable, dummification would be the most adequate encoding.', 'The variable Type can be coded as ordinal without losing information.', 'Feature generation based on variable TWF seems to be promising.', 'Feature generation based on the use of variable OSF wouldn’t be useful, but the use of Type seems to be promising.', 'Given the usual semantics of RNF variable, dummification would have been a better codification.', 'It is better to drop the variable TWF than removing all records with missing values.', 'Not knowing the semantics of PWF 
variable, dummification could have been a more adequate codification.'] +maintenance_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 25%.'] +maintenance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Process temperature [K] or Torque [Nm] can be discarded without losing information.', 'The variable Rotational speed [rpm] can be discarded without risking losing information.', 'Variables Air temperature [K] and Tool wear [min] seem to be useful for classification tasks.', 'Variables Rotational speed [rpm] and Process temperature [K] are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Tool wear [min] seems to be relevant for the majority of mining tasks.', 'Variables Torque [Nm] and Tool wear [min] seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Torque [Nm] might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Rotational speed [rpm] previously than variable Torque [Nm].'] +maintenance_boxplots.png;A set of boxplots of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['Variable Process temperature [K] is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Tool wear [min] shows a high number of outlier values.', 'Variable Air temperature [K] doesn’t have any outliers.', 'Variable Tool wear [min] presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +maintenance_histograms_symbolic.png;A set of bar charts of the variables ['Type', 'TWF', 'HDF', 'PWF', 'OSF', 'RNF'].;['All variables, but the class, should be dealt with as date.', 'The variable TWF can be seen as ordinal.', 'The variable HDF can be seen as ordinal without losing information.', 'Considering the common semantics for PWF and Type variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Type variable, dummification would be the most adequate encoding.', 'The variable Type can be coded 
as ordinal without losing information.', 'Feature generation based on variable OSF seems to be promising.', 'Feature generation based on the use of variable RNF wouldn’t be useful, but the use of Type seems to be promising.', 'Given the usual semantics of OSF variable, dummification would have been a better codification.', 'It is better to drop the variable PWF than removing all records with missing values.', 'Not knowing the semantics of RNF variable, dummification could have been a more adequate codification.'] maintenance_class_histogram.png;A bar chart showing the distribution of the target variable Machine_failure.;['Balancing this dataset would be mandatory to improve the results.'] maintenance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -maintenance_histograms_numeric.png;A set of histograms of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['All variables, but the class, should be dealt with as date.', 'The variable Process temperature [K] can be seen as ordinal.', 'The variable Air temperature [K] can be seen as ordinal without losing information.', 'Variable Air temperature [K] is balanced.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Tool wear [min] shows some outlier values.', 'Variable Rotational speed [rpm] doesn’t have any outliers.', 'Variable Tool wear [min] presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Tool wear [min] and Air temperature [K] variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Air temperature [K] variable, dummification would be the most adequate encoding.', 'The variable Tool wear [min] can be coded as ordinal without losing information.', 'Feature generation based on variable Rotational speed [rpm] seems to be promising.', 'Feature generation based on the use of variable Air temperature [K] wouldn’t be useful, but the use of Process temperature [K] seems to be promising.', 'Given the usual semantics of Rotational speed [rpm] variable, dummification would have been a better codification.', 'It is better to drop the variable Rotational speed [rpm] than removing all records with missing values.', 'Not knowing the semantics of Rotational speed [rpm] variable, dummification could have been a more adequate codification.'] -Churn_Modelling_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 42.5 and the second with the condition NumOfProducts <= 2.5.;['It is clear that variable Tenure is one of the two most relevant features.', 'The variable EstimatedSalary seems to be one of the three most relevant features.', 'The variable Balance discriminates between the target values, as shown in the decision tree.', 
'It is possible to state that NumOfProducts is the first most discriminative variable regarding the class.', 'Variable Tenure is one of the most relevant variables.', 'Variable CreditScore seems to be relevant for the majority of mining tasks.', 'Variables CreditScore and EstimatedSalary seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of True Negatives reported in the same tree is 10.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The variable CreditScore seems to be one of the two most relevant features.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (not A, not B) as 1 for any k ≤ 114.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 1931.'] -Churn_Modelling_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -Churn_Modelling_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Churn_Modelling_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Churn_Modelling_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Churn_Modelling_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the 
existence of overfitting for decision tree models with more than 3 nodes of depth.'] +maintenance_histograms_numeric.png;A set of histograms of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Rotational speed [rpm] can be seen as ordinal.', 'The variable Air temperature [K] can be seen as ordinal without losing information.', 'Variable Rotational speed [rpm] is balanced.', 'It is clear that variable Air temperature [K] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Torque [Nm] shows some outlier values.', 'Variable Air temperature [K] doesn’t have any outliers.', 'Variable Process temperature [K] presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Torque [Nm] and Air temperature [K] variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Torque [Nm] variable, dummification would be the most adequate encoding.', 'The variable Rotational speed [rpm] can be coded as ordinal without losing information.', 'Feature generation based on variable Rotational speed [rpm] seems to be promising.', 'Feature generation based on the use of variable Air temperature [K] wouldn’t be useful, but the use of Process temperature [K] seems to be promising.', 'Given the usual semantics of Rotational speed [rpm] variable, dummification would have been a better codification.', 'It is better to drop the variable Process temperature [K] than removing all records with missing values.', 'Not knowing the semantics of Tool wear [min] variable, dummification could have been a more adequate codification.'] +Churn_Modelling_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 42.5 and the second with the condition NumOfProducts <= 2.5.;['The variable NumOfProducts discriminates between the target values, as shown in the decision tree.', 'Variable Age is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of True Positives reported in the same tree is 50.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 124.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 0.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as 1.'] +Churn_Modelling_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where 
the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +Churn_Modelling_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +Churn_Modelling_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +Churn_Modelling_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +Churn_Modelling_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] Churn_Modelling_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Churn_Modelling_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 15 and 30%.'] -Churn_Modelling_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Balance or EstimatedSalary can be discarded without losing information.', 'The variable Tenure can be discarded without risking losing information.', 'Variables EstimatedSalary and Age are redundant, but we can’t say the same for the pair Balance and NumOfProducts.', 'Variables Age and NumOfProducts are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Tenure seems to be relevant for the majority of mining tasks.', 'Variables EstimatedSalary and Age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Age might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Balance previously than variable Tenure.'] -Churn_Modelling_boxplots.png;A set of boxplots of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['Variable Balance is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Balance shows some outliers, but we can’t be sure of the same for variable EstimatedSalary.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows a high number of outlier values.', 'Variable Tenure doesn’t have any outliers.', 'Variable Balance presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Churn_Modelling_histograms_symbolic.png;A set of bar charts of the variables ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember'].;['All variables, but the class, should be dealt with as date.', 'The variable IsActiveMember can be seen as ordinal.', 'The variable IsActiveMember can be seen as ordinal without losing information.', 'Considering the common semantics for Geography and Gender variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for IsActiveMember variable, dummification would be the most adequate encoding.', 'The variable Geography can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable HasCrCard wouldn’t be useful, but the use of Geography seems to be promising.', 'Given the usual semantics of HasCrCard variable, dummification would have been a better codification.', 'It is better to drop the variable IsActiveMember than removing all records with missing values.', 'Not knowing the semantics of Geography variable, dummification could have been a more adequate codification.'] 
+Churn_Modelling_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 25%.'] +Churn_Modelling_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables EstimatedSalary or NumOfProducts can be discarded without losing information.', 'The variable EstimatedSalary can be discarded without risking losing information.', 'Variables Age and CreditScore are redundant, but we can’t say the same for the pair Tenure and NumOfProducts.', 'Variables NumOfProducts and CreditScore are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable EstimatedSalary seems to be relevant for the majority of mining tasks.', 'Variables NumOfProducts and CreditScore seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Balance might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Tenure previously than variable CreditScore.'] +Churn_Modelling_boxplots.png;A set of boxplots of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['Variable Tenure is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Tenure shows some outliers, but we can’t be sure of the same for variable NumOfProducts.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows some outlier values.', 'Variable EstimatedSalary doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Churn_Modelling_histograms_symbolic.png;A set of bar charts of the variables ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember'].;['All variables, but the class, should be dealt with as binary.', 'The variable IsActiveMember can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and Geography variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for IsActiveMember variable, dummification would be the most adequate encoding.', 'The variable IsActiveMember can be coded as ordinal without losing information.', 'Feature generation based on variable IsActiveMember seems to be promising.', 'Feature generation based on the use of variable 
Gender wouldn’t be useful, but the use of Geography seems to be promising.', 'Given the usual semantics of Geography variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than removing all records with missing values.', 'Not knowing the semantics of Gender variable, dummification could have been a more adequate codification.'] Churn_Modelling_class_histogram.png;A bar chart showing the distribution of the target variable Exited.;['Balancing this dataset would be mandatory to improve the results.'] -Churn_Modelling_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Churn_Modelling_histograms_numeric.png;A set of histograms of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Tenure can be seen as ordinal.', 'The variable EstimatedSalary can be seen as ordinal without losing information.', 'Variable EstimatedSalary is balanced.', 'It is clear that variable Balance shows some outliers, but we can’t be sure of the same for variable CreditScore.', 'Outliers seem to be a problem in the dataset.', 'Variable NumOfProducts shows some outlier values.', 'Variable Balance doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Tenure and CreditScore variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Balance variable, dummification would be the most adequate encoding.', 'The variable NumOfProducts can be coded as ordinal without losing information.', 'Feature generation based on variable Balance seems to be promising.', 'Feature generation based on the use of variable Balance wouldn’t be useful, but the use of CreditScore seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable EstimatedSalary than removing all records with missing values.', 'Not knowing the semantics of Tenure variable, dummification could have been a more adequate codification.'] -vehicle_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition MAJORSKEWNESS <= 74.5 and the second with the condition CIRCULARITY <= 49.5.;['It is clear that variable MINORVARIANCE is one of the three most relevant features.', 'The variable MINORKURTOSIS seems to be one of the four most relevant features.', 'The variable DISTANCE CIRCULARITY discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CIRCULARITY is the second most discriminative variable regarding the class.', 'Variable DISTANCE CIRCULARITY is one of the most relevant variables.', 'Variable GYRATIONRADIUS seems to be relevant for the majority of mining tasks.', 'Variables MAJORSKEWNESS and GYRATIONRADIUS seem to be useful 
for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The variable MAJORVARIANCE seems to be one of the four most relevant features.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], the Decision Tree presented classifies (A,B) as 3.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], the Decision Tree presented classifies (A, not B) as 4.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (not A, B) as 2 for any k ≤ 1.'] -vehicle_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -vehicle_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -vehicle_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -vehicle_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -vehicle_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -vehicle_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 5 and 20%.'] 
-vehicle_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables RADIUS RATIO or DISTANCE CIRCULARITY can be discarded without losing information.', 'The variable MINORKURTOSIS can be discarded without risking losing information.', 'Variables MINORVARIANCE and MAJORKURTOSIS are redundant, but we can’t say the same for the pair MAJORVARIANCE and CIRCULARITY.', 'Variables GYRATIONRADIUS and DISTANCE CIRCULARITY are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable RADIUS RATIO seems to be relevant for the majority of mining tasks.', 'Variables DISTANCE CIRCULARITY and MINORKURTOSIS seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable RADIUS RATIO might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable CIRCULARITY previously than variable COMPACTNESS.'] -vehicle_boxplots.png;A set of boxplots of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['Variable MAJORSKEWNESS is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable MAJORSKEWNESS shows some outliers, but we can’t be sure of the same for variable COMPACTNESS.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORVARIANCE shows some outlier values.', 'Variable COMPACTNESS doesn’t have any outliers.', 'Variable COMPACTNESS presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Churn_Modelling_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Churn_Modelling_histograms_numeric.png;A set of histograms of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Age can be seen as ordinal.', 'The variable EstimatedSalary can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable 
NumOfProducts shows some outliers, but we can’t be sure of the same for variable Tenure.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows some outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable NumOfProducts presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for NumOfProducts and CreditScore variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Balance variable, dummification would be the most adequate encoding.', 'The variable CreditScore can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable EstimatedSalary wouldn’t be useful, but the use of CreditScore seems to be promising.', 'Given the usual semantics of CreditScore variable, dummification would have been a better codification.', 'It is better to drop the variable EstimatedSalary than removing all records with missing values.', 'Not knowing the semantics of CreditScore variable, dummification could have been a more adequate codification.'] +vehicle_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition MAJORSKEWNESS <= 74.5 and the second with the condition CIRCULARITY <= 49.5.;['The variable MAJORSKEWNESS discriminates between the target values, as shown in the decision tree.', 'Variable MAJORSKEWNESS is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The variable MAJORSKEWNESS seems to be one of the five most relevant features.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 4.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 3.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (A,B) as 4 for any k ≤ 3.'] +vehicle_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +vehicle_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +vehicle_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the 
y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +vehicle_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] +vehicle_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] +vehicle_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 10 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 5 and 20%.'] +vehicle_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables MAJORSKEWNESS or CIRCULARITY can be discarded without losing information.', 'The variable GYRATIONRADIUS can be discarded without risking losing information.', 'Variables CIRCULARITY and COMPACTNESS are redundant, but we can’t say the same for the pair MINORVARIANCE and MAJORVARIANCE.', 'Variables MINORVARIANCE and MINORKURTOSIS are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable MAJORVARIANCE seems to be relevant for the majority of mining tasks.', 'Variables MINORKURTOSIS and MINORSKEWNESS seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable MAJORKURTOSIS might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable MINORKURTOSIS previously than variable MAJORSKEWNESS.'] +vehicle_boxplots.png;A set of boxplots of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['Variable COMPACTNESS is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable MINORSKEWNESS shows some outliers, but we can’t be sure of the same for variable MINORVARIANCE.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORKURTOSIS shows some outlier values.', 'Variable COMPACTNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] vehicle_class_histogram.png;A bar chart showing the distribution of the target variable target.;['Balancing this dataset would be mandatory to improve the results.'] -vehicle_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -vehicle_histograms_numeric.png;A set of histograms of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['All variables, but the class, should be dealt with as date.', 'The variable MAJORVARIANCE can be seen as ordinal.', 
'The variable MINORKURTOSIS can be seen as ordinal without losing information.', 'Variable COMPACTNESS is balanced.', 'It is clear that variable COMPACTNESS shows some outliers, but we can’t be sure of the same for variable MINORSKEWNESS.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORSKEWNESS shows some outlier values.', 'Variable MINORSKEWNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for GYRATIONRADIUS and COMPACTNESS variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for COMPACTNESS variable, dummification would be the most adequate encoding.', 'The variable MINORVARIANCE can be coded as ordinal without losing information.', 'Feature generation based on variable MAJORSKEWNESS seems to be promising.', 'Feature generation based on the use of variable MINORVARIANCE wouldn’t be useful, but the use of COMPACTNESS seems to be promising.', 'Given the usual semantics of RADIUS RATIO variable, dummification would have been a better codification.', 'It is better to drop the variable MINORVARIANCE than removing all records with missing values.', 'Not knowing the semantics of MAJORSKEWNESS variable, dummification could have been a more adequate codification.'] -adult_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition hours-per-week <= 41.5 and the second with the condition capital-loss <= 1820.5.;['It is clear that variable fnlwgt is one of the five most relevant features.', 'The variable capital-gain seems to be one of the four most relevant features.', 'The variable capital-loss discriminates between the target values, as shown in the decision tree.', 'It is possible to state that fnlwgt is the first most discriminative variable regarding the class.', 'Variable fnlwgt is one of the most relevant variables.', 'Variable fnlwgt seems to be relevant for the majority of mining tasks.', 'Variables capital-gain and educational-num seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of True Positives is higher than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (A, not B) as <=50K for any k ≤ 21974.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (not A, B) as >50K for any k ≤ 541.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], the Decision Tree presented classifies (A, not B) as >50K.'] -adult_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis 
represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +vehicle_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +vehicle_histograms_numeric.png;A set of histograms of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['All variables, but the class, should be dealt with as date.', 'The variable MINORSKEWNESS can be seen as ordinal.', 'The variable GYRATIONRADIUS can be seen as ordinal without losing information.', 'Variable COMPACTNESS is balanced.', 'It is clear that variable MAJORSKEWNESS shows some outliers, but we can’t be sure of the same for variable MAJORVARIANCE.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORKURTOSIS shows a high number of outlier values.', 'Variable MINORSKEWNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for RADIUS RATIO and COMPACTNESS variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for MINORKURTOSIS variable, dummification would be the most adequate encoding.', 'The variable DISTANCE CIRCULARITY can be coded as ordinal without losing information.', 'Feature generation based on variable GYRATIONRADIUS seems to be promising.', 'Feature generation based on the use of variable MAJORSKEWNESS wouldn’t be useful, but the use of COMPACTNESS seems to be promising.', 'Given the usual semantics of GYRATIONRADIUS variable, dummification would have been a better codification.', 'It is better to drop the variable COMPACTNESS than removing all records with missing values.', 'Not knowing the semantics of MAJORSKEWNESS variable, dummification could have been a more adequate codification.'] +adult_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition hours-per-week <= 41.5 and the second with the condition capital-loss <= 1820.5.;['The variable capital-loss discriminates between the target values, as shown in the decision tree.', 'Variable hours-per-week is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'Considering that 
A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as >50K.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (A, not B) as >50K for any k ≤ 541.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (not A, B) as >50K for any k ≤ 21974.'] +adult_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] adult_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -adult_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -adult_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -adult_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +adult_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +adult_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k 
larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +adult_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] adult_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -adult_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 30%.'] -adult_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables fnlwgt or educational-num can be discarded without losing information.', 'The variable educational-num can be discarded without risking losing information.', 'Variables capital-loss and capital-gain are redundant, but we can’t say the same for the pair hours-per-week and educational-num.', 'Variables capital-gain and fnlwgt are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables capital-gain and hours-per-week seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable hours-per-week might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable capital-gain previously than variable hours-per-week.'] -adult_boxplots.png;A set of boxplots of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['Variable capital-gain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable capital-gain.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows some outlier values.', 'Variable hours-per-week doesn’t have any outliers.', 'Variable educational-num presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes 
performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -adult_histograms_symbolic.png;A set of bar charts of the variables ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'gender'].;['All variables, but the class, should be dealt with as numeric.', 'The variable relationship can be seen as ordinal.', 'The variable relationship can be seen as ordinal without losing information.', 'Considering the common semantics for workclass and education variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for gender variable, dummification would be the most adequate encoding.', 'The variable marital-status can be coded as ordinal without losing information.', 'Feature generation based on variable education seems to be promising.', 'Feature generation based on the use of variable race wouldn’t be useful, but the use of workclass seems to be promising.', 'Given the usual semantics of education variable, dummification would have been a better codification.', 'It is better to drop the variable workclass than removing all records with missing values.', 'Not knowing the semantics of education variable, dummification could have been a more adequate codification.'] +adult_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 15 and 30%.'] +adult_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables fnlwgt or hours-per-week can be discarded without losing information.', 'The variable hours-per-week can be discarded without risking losing information.', 'Variables capital-loss and age are redundant.', 'Variables age and educational-num are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable capital-gain seems to be relevant for the majority of mining tasks.', 'Variables fnlwgt and age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable fnlwgt might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable capital-gain previously than variable fnlwgt.'] +adult_boxplots.png;A set of boxplots of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['Variable hours-per-week is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable educational-num shows some outliers, but we can’t be sure of the same for variable fnlwgt.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows a high number of outlier values.', 'Variable capital-gain doesn’t have any outliers.', 'Variable capital-gain presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +adult_histograms_symbolic.png;A set of bar charts of the variables ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'gender'].;['All variables, but the class, should be dealt with as date.', 'The variable gender can be seen as ordinal.', 'The variable education can be seen as ordinal without losing information.', 'Considering the common semantics for marital-status and workclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for marital-status variable, dummification would be the most adequate encoding.', 'The variable education can be coded as ordinal without losing information.', 'Feature generation based on variable marital-status seems to be promising.', 'Feature generation based on the use of variable occupation wouldn’t be useful, but the use of workclass seems to be promising.', 'Given the usual semantics of education variable, dummification would have been a better codification.', 'It is better to drop the variable relationship than removing all records with missing values.', 'Not knowing the semantics of occupation variable, dummification could have been a more adequate codification.'] 
adult_class_histogram.png;A bar chart showing the distribution of the target variable income.;['Balancing this dataset would be mandatory to improve the results.'] adult_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -adult_histograms_numeric.png;A set of histograms of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable capital-gain can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 'Variable hours-per-week is balanced.', 'It is clear that variable capital-loss shows some outliers, but we can’t be sure of the same for variable capital-gain.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows some outlier values.', 'Variable fnlwgt doesn’t have any outliers.', 'Variable educational-num presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for capital-loss and age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for educational-num variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable capital-loss seems to be promising.', 'Feature generation based on the use of variable hours-per-week wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of capital-gain variable, dummification would have been a better codification.', 'It is better to drop the variable educational-num than removing all records with missing values.', 'Not knowing the semantics of hours-per-week variable, dummification could have been a more adequate codification.'] -Covid_Data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition CARDIOVASCULAR <= 50.0 and the second with the condition ASHTMA <= 1.5.;['It is clear that variable MEDICAL_UNIT is one of the five most relevant features.', 'The variable ASTHMA seems to be one of the three most relevant features.', 'The variable ASTHMA discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CARDIOVASCULAR is the first most discriminative variable regarding the class.', 'Variable PREGNANT is one of the most relevant variables.', 'Variable MEDICAL_UNIT seems to be relevant for the majority of mining tasks.', 'Variables AGE and PNEUMONIA seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 75%.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of True Positives 
is higher than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is lower than 75%.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 16.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 7971.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as Yes for any k ≤ 46.'] -Covid_Data_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Covid_Data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Covid_Data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -Covid_Data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Covid_Data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +adult_histograms_numeric.png;A set of histograms of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['All variables, but the class, should be dealt with as date.', 'The variable fnlwgt can be seen as ordinal.', 'The variable hours-per-week can be seen as ordinal without losing information.', 'Variable fnlwgt is balanced.', 'It is clear that variable educational-num shows some outliers, but we can’t be sure of the same for variable capital-loss.', 'Outliers seem to be a problem in the dataset.', 'Variable educational-num shows some outlier values.', 'Variable capital-loss doesn’t have any 
outliers.', 'Variable age presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for capital-gain and age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for fnlwgt variable, dummification would be the most adequate encoding.', 'The variable educational-num can be coded as ordinal without losing information.', 'Feature generation based on variable educational-num seems to be promising.', 'Feature generation based on the use of variable capital-loss wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of capital-gain variable, dummification would have been a better codification.', 'It is better to drop the variable hours-per-week than removing all records with missing values.', 'Not knowing the semantics of fnlwgt variable, dummification could have been a more adequate codification.'] +Covid_Data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition CARDIOVASCULAR <= 50.0 and the second with the condition ASHTMA <= 1.5.;['The variable ASHTMA discriminates between the target values, as shown in the decision tree.', 'Variable ASHTMA is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 75%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The recall for the presented tree is lower than 90%.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (not A, B) as Yes for any k ≤ 46.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 7971.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASHTMA <= 1.5], it is possible to state that KNN algorithm classifies (A, not B) as Yes for any k ≤ 173.'] +Covid_Data_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +Covid_Data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] +Covid_Data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the 
lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +Covid_Data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +Covid_Data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] Covid_Data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Covid_Data_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 10 principal components would imply an error between 15 and 25%.'] -Covid_Data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables RENAL_CHRONIC or OTHER_DISEASE can be discarded without losing information.', 'The variable MEDICAL_UNIT can be discarded without risking losing information.', 'Variables TOBACCO and PREGNANT are redundant, but we can’t say the same for the pair HIPERTENSION and RENAL_CHRONIC.', 'Variables PREGNANT and HIPERTENSION are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable TOBACCO seems to be relevant for the majority of mining tasks.', 'Variables AGE and ICU seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PREGNANT might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ICU previously than variable MEDICAL_UNIT.'] -Covid_Data_boxplots.png;A set of boxplots of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['Variable OTHER_DISEASE is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable HIPERTENSION shows some outliers, but we can’t be sure of the same for variable COPD.', 'Outliers seem to be a problem in the dataset.', 'Variable HIPERTENSION shows a high number of outlier values.', 'Variable MEDICAL_UNIT doesn’t have any outliers.', 'Variable AGE presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Covid_Data_histograms_symbolic.png;A set of bar charts of the variables ['USMER', 'SEX', 'PATIENT_TYPE'].;['All variables, but the class, should be dealt with as numeric.', 'The variable PATIENT_TYPE can be seen as ordinal.', 'The variable PATIENT_TYPE can be seen as ordinal without losing information.', 'Considering the common semantics for PATIENT_TYPE and USMER variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for USMER variable, dummification would be the most adequate encoding.', 'The variable PATIENT_TYPE can be coded as ordinal without losing information.', 'Feature generation based on variable USMER seems to be promising.', 'Feature generation based on the use of variable SEX wouldn’t be useful, but the use of USMER seems to be promising.', 'Given the usual semantics of SEX variable, dummification would have been a better codification.', 'It is better to drop the variable PATIENT_TYPE than removing all records with missing values.', 'Not knowing the semantics of 
USMER variable, dummification could have been a more adequate codification.'] +Covid_Data_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 11 principal components would imply an error between 15 and 25%.'] +Covid_Data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables HIPERTENSION or RENAL_CHRONIC can be discarded without losing information.', 'The variable MEDICAL_UNIT can be discarded without risking losing information.', 'Variables PREGNANT and TOBACCO are redundant, but we can’t say the same for the pair MEDICAL_UNIT and ASTHMA.', 'Variables COPD and AGE are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable ICU seems to be relevant for the majority of mining tasks.', 'Variables HIPERTENSION and TOBACCO seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable COPD might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ICU previously than variable PREGNANT.'] +Covid_Data_boxplots.png;A set of boxplots of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['Variable OTHER_DISEASE is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ASTHMA shows some outliers, but we can’t be sure of the same for variable COPD.', 'Outliers seem to be a problem in the dataset.', 'Variable AGE shows some outlier values.', 'Variable ASTHMA doesn’t have any outliers.', 'Variable OTHER_DISEASE presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Covid_Data_histograms_symbolic.png;A set of bar charts of the variables ['USMER', 'SEX', 'PATIENT_TYPE'].;['All variables, but the class, should be dealt with as date.', 'The variable PATIENT_TYPE can be seen as ordinal.', 'The variable USMER can be seen as ordinal without losing information.', 'Considering the common semantics for USMER and SEX variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PATIENT_TYPE variable, dummification would be the most adequate encoding.', 'The variable PATIENT_TYPE can be coded as ordinal without losing information.', 'Feature generation 
based on variable SEX seems to be promising.', 'Feature generation based on the use of variable PATIENT_TYPE wouldn’t be useful, but the use of USMER seems to be promising.', 'Given the usual semantics of PATIENT_TYPE variable, dummification would have been a better codification.', 'It is better to drop the variable PATIENT_TYPE than removing all records with missing values.', 'Not knowing the semantics of SEX variable, dummification could have been a more adequate codification.'] Covid_Data_class_histogram.png;A bar chart showing the distribution of the target variable CLASSIFICATION.;['Balancing this dataset would be mandatory to improve the results.'] -Covid_Data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Covid_Data_histograms_numeric.png;A set of histograms of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['All variables, but the class, should be dealt with as binary.', 'The variable ICU can be seen as ordinal.', 'The variable ICU can be seen as ordinal without losing information.', 'Variable PNEUMONIA is balanced.', 'It is clear that variable PNEUMONIA shows some outliers, but we can’t be sure of the same for variable HIPERTENSION.', 'Outliers seem to be a problem in the dataset.', 'Variable COPD shows some outlier values.', 'Variable COPD doesn’t have any outliers.', 'Variable TOBACCO presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for TOBACCO and MEDICAL_UNIT variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PNEUMONIA variable, dummification would be the most adequate encoding.', 'The variable OTHER_DISEASE can be coded as ordinal without losing information.', 'Feature generation based on variable COPD seems to be promising.', 'Feature generation based on the use of variable TOBACCO wouldn’t be useful, but the use of MEDICAL_UNIT seems to be promising.', 'Given the usual semantics of CARDIOVASCULAR variable, dummification would have been a better codification.', 'It is better to drop the variable ASTHMA than removing all records with missing values.', 'Not knowing the semantics of OTHER_DISEASE variable, dummification could have been a more adequate codification.'] -sky_survey_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition dec <= 22.21 and the second with the condition mjd <= 55090.5.;['It is clear that variable run is one of the two most relevant features.', 'The variable run seems to be one of the five most relevant features.', 'The variable run discriminates between the target values, as shown in the decision tree.', 'It is possible to state that dec is the first most discriminative variable regarding the class.', 'Variable redshift is one of the most relevant variables.', 'Variable field seems to be relevant for the majority of 
mining tasks.', 'Variables run and mjd seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as QSO.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that KNN algorithm classifies (A, not B) as QSO for any k ≤ 208.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that KNN algorithm classifies (A, not B) as GALAXY for any k ≤ 1728.'] -sky_survey_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -sky_survey_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -sky_survey_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -sky_survey_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -sky_survey_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] -sky_survey_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 
'Using the first 6 principal components would imply an error between 10 and 30%.'] -sky_survey_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables plate or ra can be discarded without losing information.', 'The variable ra can be discarded without risking losing information.', 'Variables run and dec are redundant, but we can’t say the same for the pair plate and field.', 'Variables field and plate are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable plate seems to be relevant for the majority of mining tasks.', 'Variables mjd and dec seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable run might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable camcol previously than variable dec.'] -sky_survey_boxplots.png;A set of boxplots of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['Variable redshift is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ra shows some outliers, but we can’t be sure of the same for variable dec.', 'Outliers seem to be a problem in the dataset.', 'Variable redshift shows a high number of outlier values.', 'Variable mjd doesn’t have any outliers.', 'Variable field presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Covid_Data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Covid_Data_histograms_numeric.png;A set of histograms of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['All variables, but the class, should be dealt with as numeric.', 'The variable TOBACCO can be seen as ordinal.', 'The variable MEDICAL_UNIT can be seen as ordinal without losing information.', 'Variable ICU is balanced.', 'It is clear that variable RENAL_CHRONIC shows some outliers, but we can’t be sure of the same for variable ICU.', 'Outliers seem to be a problem in the dataset.', 'Variable OTHER_DISEASE shows some outlier values.', 'Variable MEDICAL_UNIT doesn’t have any outliers.', 'Variable PREGNANT presents some 
outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for COPD and MEDICAL_UNIT variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ASTHMA variable, dummification would be the most adequate encoding.', 'The variable PREGNANT can be coded as ordinal without losing information.', 'Feature generation based on variable ICU seems to be promising.', 'Feature generation based on the use of variable PNEUMONIA wouldn’t be useful, but the use of MEDICAL_UNIT seems to be promising.', 'Given the usual semantics of HIPERTENSION variable, dummification would have been a better codification.', 'It is better to drop the variable PREGNANT than removing all records with missing values.', 'Not knowing the semantics of COPD variable, dummification could have been a more adequate codification.'] +sky_survey_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition dec <= 22.21 and the second with the condition mjd <= 55090.5.;['The variable mjd discriminates between the target values, as shown in the decision tree.', 'Variable mjd is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 75%.', 'The number of False Negatives is higher than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], the Decision Tree presented classifies (A, not B) as QSO.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], the Decision Tree presented classifies (A, not B) as QSO.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that KNN algorithm classifies (A,B) as GALAXY for any k ≤ 945.'] +sky_survey_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +sky_survey_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +sky_survey_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of 
overfitting for random forest models with more than 1002 estimators.'] +sky_survey_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +sky_survey_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] +sky_survey_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 15 and 30%.'] +sky_survey_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables redshift or plate can be discarded without losing information.', 'The variable camcol can be discarded without risking losing information.', 'Variables run and ra are redundant, but we can’t say the same for the pair mjd and dec.', 'Variables run and redshift are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable dec seems to be relevant for the majority of mining tasks.', 'Variables camcol and mjd seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable ra might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable camcol previously than variable mjd.'] +sky_survey_boxplots.png;A set of boxplots of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['Variable plate is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable field shows some outliers, but we can’t be sure of the same for variable ra.', 'Outliers seem to be a problem in the dataset.', 'Variable field shows some outlier values.', 'Variable field doesn’t have any outliers.', 'Variable redshift presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 
'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] sky_survey_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] -sky_survey_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -sky_survey_histograms_numeric.png;A set of histograms of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['All variables, but the class, should be dealt with as binary.', 'The variable dec can be seen as ordinal.', 'The variable run can be seen as ordinal without losing information.', 'Variable dec is balanced.', 'It is clear that variable field shows some outliers, but we can’t be sure of the same for variable camcol.', 'Outliers seem to be a problem in the dataset.', 'Variable plate shows some outlier values.', 'Variable field doesn’t have any outliers.', 'Variable redshift presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for redshift and ra variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for plate variable, dummification would be the most adequate encoding.', 'The variable ra can be coded as ordinal without losing information.', 'Feature generation based on variable plate seems to be promising.', 'Feature generation based on the use of variable run wouldn’t be useful, but the use of ra seems to be promising.', 'Given the usual semantics of plate variable, dummification would have been a better codification.', 'It is better to drop the variable mjd than removing all records with missing values.', 'Not knowing the semantics of camcol variable, dummification could have been a more adequate codification.'] -Wine_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Total phenols <= 2.36 and the second with the condition Proanthocyanins <= 1.58.;['It is clear that variable Alcohol is one of the three most relevant features.', 'The variable OD280-OD315 of diluted wines seems to be one of the four most relevant features.', 'The variable Total phenols discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Hue is the second most discriminative variable regarding the class.', 'Variable Alcalinity of ash is one of the most relevant variables.', 'Variable Proanthocyanins seems to be relevant for the majority of mining tasks.', 'Variables Flavanoids and OD280-OD315 of diluted wines seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of 
True Positives is lower than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The accuracy for the presented tree is lower than its recall.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (not A, B) as 3 for any k ≤ 2.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 2.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 49.'] -Wine_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -Wine_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -Wine_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Wine_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -Wine_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -Wine_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 25%.'] -Wine_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Alcalinity of ash or Flavanoids can be discarded without losing information.', 'The variable Alcohol can be discarded without risking losing information.', 'Variables Ash and Flavanoids seem to be useful for classification tasks.', 'Variables Proanthocyanins and Nonflavanoid phenols are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Hue seems to be relevant for the majority of mining tasks.', 'Variables Color intensity and OD280-OD315 of diluted wines seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Malic acid might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Nonflavanoid phenols previously than variable Alcohol.'] -Wine_boxplots.png;A set of boxplots of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['Variable Flavanoids is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Flavanoids shows some outliers, but we can’t be sure of the same for variable Nonflavanoid phenols.', 'Outliers seem to be a problem in the dataset.', 'Variable Alcalinity of ash shows some outlier values.', 'Variable Alcohol doesn’t have any outliers.', 'Variable Proanthocyanins presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +sky_survey_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +sky_survey_histograms_numeric.png;A set of histograms of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['All variables, but the class, should be dealt with as date.', 'The variable run can be seen as ordinal.', 'The variable field can be seen as ordinal without losing information.', 'Variable ra is balanced.', 'It is clear that variable camcol shows some outliers, but we can’t be sure of the same for variable mjd.', 'Outliers seem to be a problem in the dataset.', 'Variable redshift shows a high number of outlier values.', 'Variable field doesn’t 
have any outliers.', 'Variable plate presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ra and dec variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for field variable, dummification would be the most adequate encoding.', 'The variable camcol can be coded as ordinal without losing information.', 'Feature generation based on variable redshift seems to be promising.', 'Feature generation based on the use of variable camcol wouldn’t be useful, but the use of ra seems to be promising.', 'Given the usual semantics of ra variable, dummification would have been a better codification.', 'It is better to drop the variable redshift than removing all records with missing values.', 'Not knowing the semantics of plate variable, dummification could have been a more adequate codification.'] +Wine_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Total phenols <= 2.36 and the second with the condition Proanthocyanins <= 1.58.;['The variable Proanthocyanins discriminates between the target values, as shown in the decision tree.', 'Variable Proanthocyanins is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A,B) as 3 for any k ≤ 60.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 60.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 49.'] +Wine_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +Wine_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +Wine_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown 
can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] +Wine_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +Wine_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +Wine_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 10 principal components would imply an error between 15 and 30%.'] +Wine_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Flavanoids or Hue can be discarded without losing information.', 'The variable Color intensity can be discarded without risking losing information.', 'Variables Color intensity and Alcohol are redundant, but we can’t say the same for the pair Flavanoids and Alcalinity of ash.', 'Variables Flavanoids and Total phenols are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ash seems to be relevant for the majority of mining tasks.', 'Variables Alcalinity of ash and Malic acid seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Alcohol might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable OD280-OD315 of diluted wines previously than variable Total phenols.'] +Wine_boxplots.png;A set of boxplots of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['Variable OD280-OD315 of diluted wines is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Nonflavanoid phenols shows some outliers, but we can’t be sure of the same for variable Color intensity.', 'Outliers seem to be a problem in the dataset.', 'Variable Hue shows some outlier values.', 'Variable Malic acid doesn’t have any outliers.', 'Variable OD280-OD315 of 
diluted wines presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Wine_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] -Wine_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Wine_histograms_numeric.png;A set of histograms of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['All variables, but the class, should be dealt with as date.', 'The variable Hue can be seen as ordinal.', 'The variable Color intensity can be seen as ordinal without losing information.', 'Variable Nonflavanoid phenols is balanced.', 'It is clear that variable Nonflavanoid phenols shows some outliers, but we can’t be sure of the same for variable Ash.', 'Outliers seem to be a problem in the dataset.', 'Variable Ash shows a high number of outlier values.', 'Variable Hue doesn’t have any outliers.', 'Variable Proanthocyanins presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Hue and Alcohol variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Color intensity variable, dummification would be the most adequate encoding.', 'The variable Nonflavanoid phenols can be coded as ordinal without losing information.', 'Feature generation based on variable Alcalinity of ash seems to be promising.', 'Feature generation based on the use of variable Ash wouldn’t be useful, but the use of Alcohol seems to be promising.', 'Given the usual semantics of Proanthocyanins variable, dummification would have been a better codification.', 'It is better to drop the variable Alcalinity of ash than removing all records with missing values.', 'Not knowing the semantics of Flavanoids variable, dummification could have been a more adequate codification.'] -water_potability_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Hardness <= 278.29 and the second with the condition Chloramines <= 6.7.;['It is clear that variable Turbidity is one of the three most relevant features.', 'The variable Sulfate seems to be one of the three most relevant features.', 'The 
variable ph discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Conductivity is the second most discriminative variable regarding the class.', 'Variable Chloramines is one of the most relevant variables.', 'Variable Trihalomethanes seems to be relevant for the majority of mining tasks.', 'Variables Turbidity and Chloramines seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives reported in the same tree is 50.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 8.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], the Decision Tree presented classifies (A,B) as 0.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 6.'] -water_potability_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -water_potability_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +Wine_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Wine_histograms_numeric.png;A set of histograms of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['All variables, but the class, should be dealt with as binary.', 'The variable Total phenols can be seen as ordinal.', 'The variable Alcohol can be seen as ordinal without losing information.', 'Variable Flavanoids is balanced.', 'It is clear that variable Color intensity shows some outliers, but we can’t be sure of the same for variable Total phenols.', 'Outliers seem to be a problem in the dataset.', 'Variable Alcalinity of ash shows some outlier values.', 'Variable Alcohol doesn’t have any outliers.', 'Variable Ash presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for OD280-OD315 of 
diluted wines and Alcohol variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for OD280-OD315 of diluted wines variable, dummification would be the most adequate encoding.', 'The variable Hue can be coded as ordinal without losing information.', 'Feature generation based on variable Malic acid seems to be promising.', 'Feature generation based on the use of variable Nonflavanoid phenols wouldn’t be useful, but the use of Alcohol seems to be promising.', 'Given the usual semantics of Total phenols variable, dummification would have been a better codification.', 'It is better to drop the variable Alcalinity of ash than removing all records with missing values.', 'Not knowing the semantics of Alcalinity of ash variable, dummification could have been a more adequate codification.'] +water_potability_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Hardness <= 278.29 and the second with the condition Chloramines <= 6.7.;['The variable Hardness discriminates between the target values, as shown in the decision tree.', 'Variable Chloramines is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The specificity for the presented tree is lower than 60%.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 1388.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (A, not B) as 1 for any k ≤ 6.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as 1.'] +water_potability_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +water_potability_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] water_potability_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -water_potability_overfitting_knn.png;A multi-line chart 
showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -water_potability_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +water_potability_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +water_potability_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] water_potability_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -water_potability_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 25%.'] -water_potability_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Sulfate or ph can be discarded without losing information.', 'The variable Turbidity can be discarded without risking losing information.', 'Variables Chloramines and Trihalomethanes are redundant, but we can’t say the same for the pair Conductivity and ph.', 'Variables Hardness and Turbidity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Turbidity seems to be relevant for the majority of mining tasks.', 'Variables Trihalomethanes and ph seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Turbidity might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Chloramines previously than variable Conductivity.'] -water_potability_boxplots.png;A set of boxplots of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['Variable Turbidity is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Hardness shows some outliers, but we can’t be sure of the same for variable Chloramines.', 'Outliers seem to be a problem in the dataset.', 'Variable Hardness shows some outlier values.', 'Variable Chloramines doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -water_potability_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['ph', 'Sulfate', 'Trihalomethanes'].;['Discarding variable Sulfate would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable ph seems to be promising.', 'It is better to drop the variable ph than removing all records with missing values.'] +water_potability_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 30%.'] +water_potability_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Hardness or Conductivity can be discarded without losing information.', 'The variable Turbidity can be discarded without risking losing information.', 'Variables Trihalomethanes and Hardness are redundant, but we can’t say the same for the pair Chloramines and Sulfate.', 'Variables Hardness and ph are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Turbidity seems to be relevant for the majority of mining tasks.', 'Variables Conductivity and Turbidity seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Hardness might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Turbidity previously than variable Chloramines.'] +water_potability_boxplots.png;A set of boxplots of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['Variable Trihalomethanes is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Turbidity shows some outliers, but we can’t be sure of the same for variable Sulfate.', 'Outliers seem to be a problem in the dataset.', 'Variable ph shows some outlier values.', 'Variable Turbidity doesn’t have any outliers.', 'Variable Trihalomethanes presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +water_potability_mv.png;A bar chart showing the number of missing values per variable of the dataset. 
The variables that have missing values are: ['ph', 'Sulfate', 'Trihalomethanes'].;['Discarding variable Trihalomethanes would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Sulfate seems to be promising.', 'It is better to drop the variable Trihalomethanes than removing all records with missing values.'] water_potability_class_histogram.png;A bar chart showing the distribution of the target variable Potability.;['Balancing this dataset would be mandatory to improve the results.'] -water_potability_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -water_potability_histograms_numeric.png;A set of histograms of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Hardness can be seen as ordinal.', 'The variable ph can be seen as ordinal without losing information.', 'Variable Turbidity is balanced.', 'It is clear that variable Trihalomethanes shows some outliers, but we can’t be sure of the same for variable ph.', 'Outliers seem to be a problem in the dataset.', 'Variable Turbidity shows some outlier values.', 'Variable Conductivity doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Conductivity and ph variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sulfate variable, dummification would be the most adequate encoding.', 'The variable Hardness can be coded as ordinal without losing information.', 'Feature generation based on variable Hardness seems to be promising.', 'Feature generation based on the use of variable ph wouldn’t be useful, but the use of Hardness seems to be promising.', 'Given the usual semantics of Sulfate variable, dummification would have been a better codification.', 'It is better to drop the variable Trihalomethanes than removing all records with missing values.', 'Not knowing the semantics of Sulfate variable, dummification could have been a more adequate codification.'] -abalone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Height <= 0.13 and the second with the condition Diameter <= 0.45.;['It is clear that variable Whole weight is one of the four most relevant features.', 'The variable Rings seems to be one of the four most relevant features.', 'The variable Rings discriminates between the target values, as shown in the decision tree.', 
'It is possible to state that Viscera weight is the first most discriminative variable regarding the class.', 'Variable Viscera weight is one of the most relevant variables.', 'Variable Shell weight seems to be relevant for the majority of mining tasks.', 'Variables Shucked weight and Length seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A, not B) as F for any k ≤ 1191.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A,B) as I for any k ≤ 1191.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A,B) as F for any k ≤ 117.'] +water_potability_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +water_potability_histograms_numeric.png;A set of histograms of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['All variables, but the class, should be dealt with as date.', 'The variable Trihalomethanes can be seen as ordinal.', 'The variable Chloramines can be seen as ordinal without losing information.', 'Variable Turbidity is balanced.', 'It is clear that variable Chloramines shows some outliers, but we can’t be sure of the same for variable Trihalomethanes.', 'Outliers seem to be a problem in the dataset.', 'Variable Trihalomethanes shows a high number of outlier values.', 'Variable Turbidity doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ph and Hardness variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Turbidity variable, dummification would be the most adequate encoding.', 'The variable Conductivity can be coded as ordinal without losing information.', 'Feature generation based on variable Chloramines seems to be promising.', 'Feature generation based on the use of variable Conductivity wouldn’t be useful, but the use of ph seems to be promising.', 'Given the usual semantics of Chloramines variable, dummification would have been a better codification.', 'It is better to drop the variable Hardness than removing all records with missing values.', 'Not knowing the semantics of ph 
variable, dummification could have been a more adequate codification.'] +abalone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Height <= 0.13 and the second with the condition Diameter <= 0.45.;['The variable Diameter discriminates between the target values, as shown in the decision tree.', 'Variable Diameter is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], the Decision Tree presented classifies (not A, B) as I.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A, not B) as M for any k ≤ 117.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (not A, not B) as M for any k ≤ 1191.'] abalone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] abalone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -abalone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -abalone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -abalone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision 
tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -abalone_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 30%.'] -abalone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Rings or Shucked weight can be discarded without losing information.', 'The variable Height can be discarded without risking losing information.', 'Variables Shucked weight and Whole weight are redundant, but we can’t say the same for the pair Diameter and Rings.', 'Variables Viscera weight and Diameter are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Diameter seems to be relevant for the majority of mining tasks.', 'Variables Shell weight and Length seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Length might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Diameter previously than variable Length.'] -abalone_boxplots.png;A set of boxplots of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['Variable Height is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Shucked weight shows some outliers, but we can’t be sure of the same for variable Shell weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Rings shows a high number of outlier values.', 'Variable Viscera weight doesn’t have any outliers.', 'Variable Length presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +abalone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +abalone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest 
neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] +abalone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] +abalone_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 30%.'] +abalone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables Whole weight or Length can be discarded without losing information.', 'The variable Whole weight can be discarded without risking losing information.', 'Variables Length and Height are redundant, but we can’t say the same for the pair Whole weight and Viscera weight.', 'Variables Diameter and Length are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Whole weight seems to be relevant for the majority of mining tasks.', 'Variables Whole weight and Length seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Rings might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Length previously than variable Height.'] +abalone_boxplots.png;A set of boxplots of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['Variable Rings is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Shell weight shows some outliers, but we can’t be sure of the same for variable Viscera weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Rings shows a high number of outlier values.', 'Variable Shell weight doesn’t have any outliers.', 'Variable Shell weight presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling 
transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] abalone_class_histogram.png;A bar chart showing the distribution of the target variable Sex.;['Balancing this dataset would be mandatory to improve the results.'] -abalone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -abalone_histograms_numeric.png;A set of histograms of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Diameter can be seen as ordinal.', 'The variable Whole weight can be seen as ordinal without losing information.', 'Variable Rings is balanced.', 'It is clear that variable Height shows some outliers, but we can’t be sure of the same for variable Shell weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Viscera weight shows some outlier values.', 'Variable Shucked weight doesn’t have any outliers.', 'Variable Viscera weight presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Rings and Length variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Whole weight variable, dummification would be the most adequate encoding.', 'The variable Height can be coded as ordinal without losing information.', 'Feature generation based on variable Whole weight seems to be promising.', 'Feature generation based on the use of variable Diameter wouldn’t be useful, but the use of Length seems to be promising.', 'Given the usual semantics of Rings variable, dummification would have been a better codification.', 'It is better to drop the variable Diameter than removing all records with missing values.', 'Not knowing the semantics of Shell weight variable, dummification could have been a more adequate codification.'] -smoking_drinking_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition SMK_stat_type_cd <= 1.5 and the second with the condition gamma_GTP <= 35.5.;['It is clear that variable weight is one of the four most relevant features.', 'The variable triglyceride seems to be one of the five most relevant features.', 'The variable age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that gamma_GTP is the first most discriminative variable regarding the class.', 'Variable height is one of the most relevant variables.', 'Variable SMK_stat_type_cd seems to be relevant for the majority of mining tasks.', 'Variables LDL_chole and hemoglobin seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of 
False Negatives.', 'The precision for the presented tree is lower than 75%.', 'The number of True Positives is higher than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The variable SBP seems to be one of the five most relevant features.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (not A, B) as N for any k ≤ 3135.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as Y.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (not A, B) as Y for any k ≤ 2793.'] -smoking_drinking_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +abalone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +abalone_histograms_numeric.png;A set of histograms of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['All variables, but the class, should be dealt with as date.', 'The variable Shucked weight can be seen as ordinal.', 'The variable Shucked weight can be seen as ordinal without losing information.', 'Variable Shell weight is balanced.', 'It is clear that variable Rings shows some outliers, but we can’t be sure of the same for variable Whole weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Viscera weight shows some outlier values.', 'Variable Diameter doesn’t have any outliers.', 'Variable Length presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Diameter and Length variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Length variable, dummification would be the most adequate encoding.', 'The variable Diameter can be coded as ordinal without losing information.', 'Feature generation based on variable Shucked weight seems to be promising.', 'Feature generation based on the use of variable Diameter wouldn’t be useful, but the use of Length seems to be promising.', 'Given the usual semantics of Viscera weight variable, dummification would have been a better codification.', 'It is better to drop the variable Shucked weight than removing all records with missing values.', 'Not knowing the semantics of Shell weight variable, dummification could have been a more adequate codification.'] +smoking_drinking_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition 
SMK_stat_type_cd <= 1.5 and the second with the condition gamma_GTP <= 35.5.;['The variable SMK_stat_type_cd discriminates between the target values, as shown in the decision tree.', 'Variable gamma_GTP is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of False Negatives reported in the same tree is 10.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that KNN algorithm classifies (A,B) as N for any k ≤ 3135.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as N.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], the Decision Tree presented classifies (A, not B) as Y.'] +smoking_drinking_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] smoking_drinking_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] smoking_drinking_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -smoking_drinking_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -smoking_drinking_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of 
depth.'] +smoking_drinking_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +smoking_drinking_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] smoking_drinking_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -smoking_drinking_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -smoking_drinking_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables SMK_stat_type_cd or SBP can be discarded without losing information.', 'The variable SBP can be discarded without risking losing information.', 'Variables waistline and height are redundant, but we can’t say the same for the pair triglyceride and SMK_stat_type_cd.', 'Variables waistline and LDL_chole are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable weight seems to be relevant for the majority of mining tasks.', 'Variables tot_chole and triglyceride seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable waistline might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable height previously than variable weight.'] -smoking_drinking_boxplots.png;A set of boxplots of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['Variable tot_chole is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable tot_chole shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable weight shows a high number of outlier values.', 'Variable LDL_chole doesn’t have any outliers.', 'Variable gamma_GTP presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -smoking_drinking_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'hear_left', 'hear_right'].;['All variables, but the class, should be dealt with as numeric.', 'The variable sex can be seen as ordinal.', 'The variable hear_left can be seen as ordinal without losing information.', 'Considering the common semantics for hear_right and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for hear_left variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable hear_right seems to be promising.', 'Feature generation based on the use of variable sex wouldn’t be useful, but the use of hear_left seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable sex than removing all records with missing values.', 'Not knowing the semantics of hear_right variable, dummification could 
have been a more adequate codification.'] +smoking_drinking_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] +smoking_drinking_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables waistline or age can be discarded without losing information.', 'The variable hemoglobin can be discarded without risking losing information.', 'Variables BLDS and weight are redundant, but we can’t say the same for the pair waistline and LDL_chole.', 'Variables age and height are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables height and waistline seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable waistline might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable LDL_chole previously than variable SMK_stat_type_cd.'] +smoking_drinking_boxplots.png;A set of boxplots of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['Variable waistline is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable tot_chole shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable tot_chole shows a high number of outlier values.', 'Variable SMK_stat_type_cd doesn’t have any outliers.', 'Variable BLDS presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +smoking_drinking_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'hear_left', 'hear_right'].;['All variables, but the class, should be dealt with as binary.', 'The variable hear_right can be seen as ordinal.', 'The variable hear_right can be seen as ordinal without losing information.', 'Considering the common semantics for hear_left and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for hear_right variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation 
based on variable hear_left seems to be promising.', 'Feature generation based on the use of variable hear_right wouldn’t be useful, but the use of sex seems to be promising.', 'Given the usual semantics of hear_right variable, dummification would have been a better codification.', 'It is better to drop the variable hear_left than removing all records with missing values.', 'Not knowing the semantics of hear_right variable, dummification could have been a more adequate codification.'] smoking_drinking_class_histogram.png;A bar chart showing the distribution of the target variable DRK_YN.;['Balancing this dataset would be mandatory to improve the results.'] -smoking_drinking_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -smoking_drinking_histograms_numeric.png;A set of histograms of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable SBP can be seen as ordinal.', 'The variable SMK_stat_type_cd can be seen as ordinal without losing information.', 'Variable gamma_GTP is balanced.', 'It is clear that variable SBP shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable SMK_stat_type_cd shows a high number of outlier values.', 'Variable gamma_GTP doesn’t have any outliers.', 'Variable age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and height variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for waistline variable, dummification would be the most adequate encoding.', 'The variable BLDS can be coded as ordinal without losing information.', 'Feature generation based on variable LDL_chole seems to be promising.', 'Feature generation based on the use of variable SMK_stat_type_cd wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of age variable, dummification would have been a better codification.', 'It is better to drop the variable SMK_stat_type_cd than removing all records with missing values.', 'Not knowing the semantics of gamma_GTP variable, dummification could have been a more adequate codification.'] -BankNoteAuthentication_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition skewness <= 5.16 and the second with the condition curtosis <= 0.19.;['It is clear that variable entropy is one of the four most relevant features.', 'The variable entropy seems to be one of the two most relevant features.', 'The variable variance discriminates between the target values, as shown in the decision tree.', 'It is possible to state that entropy is the second most discriminative variable regarding the class.', 'Variable entropy is one of the most relevant variables.', 
'Variable variance seems to be relevant for the majority of mining tasks.', 'Variables entropy and curtosis seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A, not B) as 1 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 179.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], the Decision Tree presented classifies (not A, B) as 0.'] -BankNoteAuthentication_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +smoking_drinking_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +smoking_drinking_histograms_numeric.png;A set of histograms of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['All variables, but the class, should be dealt with as date.', 'The variable SBP can be seen as ordinal.', 'The variable tot_chole can be seen as ordinal without losing information.', 'Variable weight is balanced.', 'It is clear that variable height shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable LDL_chole shows some outlier values.', 'Variable tot_chole doesn’t have any outliers.', 'Variable gamma_GTP presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and height variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for weight variable, dummification would be the most adequate encoding.', 'The variable hemoglobin can be coded as ordinal without losing information.', 'Feature generation based on variable waistline seems to be promising.', 'Feature generation based on the use of variable height wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of BLDS variable, dummification would have been a better codification.', 'It is better to drop the 
variable SMK_stat_type_cd than removing all records with missing values.', 'Not knowing the semantics of hemoglobin variable, dummification could have been a more adequate codification.'] +BankNoteAuthentication_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition skewness <= 5.16 and the second with the condition curtosis <= 0.19.;['The variable curtosis discriminates between the target values, as shown in the decision tree.', 'Variable skewness is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of False Positives reported in the same tree is 10.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The recall for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 131.'] +BankNoteAuthentication_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] BankNoteAuthentication_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -BankNoteAuthentication_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -BankNoteAuthentication_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -BankNoteAuthentication_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in 
overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] +BankNoteAuthentication_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +BankNoteAuthentication_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] +BankNoteAuthentication_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] BankNoteAuthentication_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -BankNoteAuthentication_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 25%.'] -BankNoteAuthentication_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['variance', 'skewness', 'curtosis', 'entropy'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables variance or curtosis can be discarded without losing information.', 'The variable entropy can be discarded without risking losing information.', 'Variables entropy and variance are redundant, but we can’t say the same for the pair skewness and curtosis.', 'Variables variance and curtosis are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable curtosis seems to be relevant for the majority of mining tasks.', 'Variables entropy and skewness seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable variance might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable variance previously than variable entropy.'] -BankNoteAuthentication_boxplots.png;A set of boxplots of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['Variable curtosis is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable curtosis shows some outliers, but we can’t be sure of the same for variable skewness.', 'Outliers seem to be a problem in the dataset.', 'Variable entropy shows a high number of outlier values.', 'Variable curtosis doesn’t have any outliers.', 'Variable variance presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +BankNoteAuthentication_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] +BankNoteAuthentication_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['variance', 'skewness', 'curtosis', 'entropy'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables variance or curtosis can be discarded without losing information.', 'The variable skewness can be discarded without risking losing information.', 'Variables entropy and curtosis are redundant, but we can’t say the same for the pair variance and skewness.', 'Variables curtosis and entropy are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable skewness seems to be relevant for the majority of mining tasks.', 'Variables curtosis and variance seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable variance might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable variance previously than variable skewness.'] +BankNoteAuthentication_boxplots.png;A set of boxplots of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['Variable curtosis is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable curtosis shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable skewness shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable variance presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] BankNoteAuthentication_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] BankNoteAuthentication_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -BankNoteAuthentication_histograms_numeric.png;A set of histograms of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable skewness can be seen as ordinal.', 'The variable curtosis can be seen as ordinal without losing information.', 'Variable skewness is balanced.', 'It is clear that variable variance shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable variance shows some outlier values.', 'Variable curtosis doesn’t have any outliers.', 'Variable skewness presents some outliers.', 'At least 75 of the variables present 
outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for skewness and variance variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for curtosis variable, dummification would be the most adequate encoding.', 'The variable skewness can be coded as ordinal without losing information.', 'Feature generation based on variable entropy seems to be promising.', 'Feature generation based on the use of variable curtosis wouldn’t be useful, but the use of variance seems to be promising.', 'Given the usual semantics of variance variable, dummification would have been a better codification.', 'It is better to drop the variable curtosis than removing all records with missing values.', 'Not knowing the semantics of variance variable, dummification could have been a more adequate codification.'] -Iris_decision_tree.png;;['It is clear that variable SepalLengthCm is one of the four most relevant features.', 'The variable SepalWidthCm seems to be one of the four most relevant features.', 'The variable SepalLengthCm discriminates between the target values, as shown in the decision tree.', 'It is possible to state that SepalLengthCm is the first most discriminative variable regarding the class.', 'Variable SepalWidthCm is one of the most relevant variables.', 'Variable SepalWidthCm seems to be relevant for the majority of mining tasks.', 'Variables SepalLengthCm and PetalLengthCm seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 90%.', 'The number of False Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The variable PetalWidthCm discriminates between the target values, as shown in the decision tree.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], the Decision Tree presented classifies (not A, B) as Iris-versicolor.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (not A, not B) as Iris-virginica for any k ≤ 38.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A, not B) as Iris-setosa for any k ≤ 35.'] -Iris_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +BankNoteAuthentication_histograms_numeric.png;A set of histograms of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable skewness can be seen as ordinal.', 'The variable skewness can be seen as ordinal without losing information.', 'Variable variance is balanced.', 'It is clear that variable variance shows some outliers, but we can’t be sure of the same for variable entropy.', 
'Outliers seem to be a problem in the dataset.', 'Variable variance shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable curtosis presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for variance and skewness variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for curtosis variable, dummification would be the most adequate encoding.', 'The variable entropy can be coded as ordinal without losing information.', 'Feature generation based on variable skewness seems to be promising.', 'Feature generation based on the use of variable curtosis wouldn’t be useful, but the use of variance seems to be promising.', 'Given the usual semantics of curtosis variable, dummification would have been a better codification.', 'It is better to drop the variable skewness than removing all records with missing values.', 'Not knowing the semantics of entropy variable, dummification could have been a more adequate codification.'] +Iris_decision_tree.png;;['The variable PetalWidthCm discriminates between the target values, as shown in the decision tree.', 'Variable PetalWidthCm is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Negatives reported in the same tree is 30.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The precision for the presented tree is lower than 90%.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], the Decision Tree presented classifies (not A, not B) as Iris-versicolor.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A,B) as Iris-virginica for any k ≤ 38.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that KNN algorithm classifies (A,B) as Iris-setosa for any k ≤ 32.'] +Iris_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] Iris_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Iris_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able 
to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -Iris_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -Iris_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] -Iris_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 30%.'] -Iris_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables PetalWidthCm or SepalLengthCm can be discarded without losing information.', 'The variable SepalLengthCm can be discarded without risking losing information.', 'Variables PetalWidthCm and SepalLengthCm seem to be useful for classification tasks.', 'Variables SepalWidthCm and PetalLengthCm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable PetalLengthCm seems to be relevant for the majority of mining tasks.', 'Variables PetalWidthCm and SepalWidthCm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable SepalLengthCm might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable SepalWidthCm previously than variable PetalLengthCm.'] -Iris_boxplots.png;A set of boxplots of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['Variable PetalWidthCm is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable SepalLengthCm shows some outliers, but we can’t be sure of the same for variable SepalWidthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable PetalWidthCm shows a high number of outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean 
variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Iris_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +Iris_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] +Iris_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] +Iris_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 25%.'] +Iris_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PetalWidthCm or SepalLengthCm can be discarded without losing information.', 'The variable PetalLengthCm can be discarded without risking losing information.', 'Variables SepalLengthCm and SepalWidthCm are redundant, but we can’t say the same for the pair PetalLengthCm and PetalWidthCm.', 'Variables PetalLengthCm and SepalLengthCm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable SepalLengthCm seems to be relevant for the majority of mining tasks.', 'Variables PetalLengthCm and SepalLengthCm seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PetalWidthCm might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable PetalLengthCm previously than variable SepalWidthCm.'] +Iris_boxplots.png;A set of boxplots of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['Variable PetalWidthCm is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable SepalWidthCm shows some outliers, but we can’t be sure of the same for variable PetalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable PetalLengthCm shows some outlier values.', 'Variable SepalLengthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] Iris_class_histogram.png;A bar chart showing the distribution of the target variable Species.;['Balancing this dataset would be mandatory to improve the results.'] Iris_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Iris_histograms_numeric.png;A set of histograms of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['All variables, but the class, should be dealt with as numeric.', 'The variable PetalWidthCm can be seen as ordinal.', 'The variable SepalLengthCm can be seen as ordinal without losing information.', 'Variable PetalWidthCm is balanced.', 'It is clear that variable PetalLengthCm shows some outliers, but we can’t be sure of the same for variable SepalWidthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable SepalWidthCm shows some outlier values.', 'Variable SepalWidthCm 
doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PetalWidthCm and SepalLengthCm variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for SepalWidthCm variable, dummification would be the most adequate encoding.', 'The variable PetalWidthCm can be coded as ordinal without losing information.', 'Feature generation based on variable SepalLengthCm seems to be promising.', 'Feature generation based on the use of variable PetalWidthCm wouldn’t be useful, but the use of SepalLengthCm seems to be promising.', 'Given the usual semantics of SepalWidthCm variable, dummification would have been a better codification.', 'It is better to drop the variable PetalWidthCm than removing all records with missing values.', 'Not knowing the semantics of SepalLengthCm variable, dummification could have been a more adequate codification.'] -phone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition int_memory <= 30.5 and the second with the condition mobile_wt <= 91.5.;['It is clear that variable sc_w is one of the four most relevant features.', 'The variable fc seems to be one of the two most relevant features.', 'The variable int_memory discriminates between the target values, as shown in the decision tree.', 'It is possible to state that sc_h is the second most discriminative variable regarding the class.', 'Variable pc is one of the most relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables ram and fc seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Negatives reported in the same tree is 30.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The variable sc_w seems to be one of the three most relevant features.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (not A, not B) as 2 for any k ≤ 636.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 469.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 469.'] -phone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -phone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with 
more than 1002 estimators.'] -phone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -phone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -phone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] -phone_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 10 and 20%.'] -phone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['The intrinsic dimensionality of this dataset is 9.', 'One of the variables px_height or n_cores can be discarded without losing information.', 'The variable px_width can be discarded without risking losing information.', 'Variables battery_power and ram are redundant, but we can’t say the same for the pair px_width and pc.', 'Variables sc_w and battery_power are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables sc_w and n_cores seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable battery_power might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable sc_w previously than variable sc_h.'] -phone_boxplots.png;A set of boxplots of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['Variable sc_h is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable battery_power shows some outliers, but we can’t be sure of the same for variable talk_time.', 'Outliers seem to be a problem in the dataset.', 'Variable sc_h shows some outlier values.', 'Variable fc doesn’t have any outliers.', 'Variable talk_time presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -phone_histograms_symbolic.png;A set of bar charts of the variables ['blue', 'dual_sim', 'four_g', 'three_g', 'touch_screen', 'wifi'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable four_g can be seen as ordinal.', 'The variable three_g can be seen as ordinal without losing information.', 'Considering the common semantics for touch_screen and blue variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for three_g variable, dummification would be the most adequate encoding.', 'The variable dual_sim can be coded as ordinal without losing information.', 'Feature generation based on variable four_g seems to be promising.', 'Feature generation based on the use of variable touch_screen wouldn’t be useful, but the use of blue seems to be promising.', 'Given the usual semantics of dual_sim variable, dummification would have been a better codification.', 'It is better to drop the variable wifi than removing all records with missing values.', 'Not knowing the semantics of touch_screen variable, dummification could have been a more adequate codification.'] 
+Iris_histograms_numeric.png;A set of histograms of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['All variables, but the class, should be dealt with as date.', 'The variable PetalWidthCm can be seen as ordinal.', 'The variable SepalLengthCm can be seen as ordinal without losing information.', 'Variable PetalWidthCm is balanced.', 'It is clear that variable PetalWidthCm shows some outliers, but we can’t be sure of the same for variable SepalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable SepalWidthCm shows a high number of outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalLengthCm presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PetalWidthCm and SepalLengthCm variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PetalLengthCm variable, dummification would be the most adequate encoding.', 'The variable PetalLengthCm can be coded as ordinal without losing information.', 'Feature generation based on variable SepalWidthCm seems to be promising.', 'Feature generation based on the use of variable PetalLengthCm wouldn’t be useful, but the use of SepalLengthCm seems to be promising.', 'Given the usual semantics of PetalLengthCm variable, dummification would have been a better codification.', 'It is better to drop the variable SepalWidthCm than removing all records with missing values.', 'Not knowing the semantics of SepalWidthCm variable, dummification could have been a more adequate codification.'] +phone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition int_memory <= 30.5 and the second with the condition mobile_wt <= 91.5.;['The variable mobile_wt discriminates between the target values, as shown in the decision tree.', 'Variable mobile_wt is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that KNN algorithm classifies (A,B) as 2 for any k ≤ 636.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (A, not B) as 0.'] +phone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] +phone_overfitting_gb.png;A multi-line 
chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] +phone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] +phone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] +phone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +phone_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 11 principal components would imply an error between 10 and 25%.'] +phone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['The intrinsic dimensionality of this dataset is 11.', 'One of the variables px_height or battery_power can be discarded without losing information.', 'The variable battery_power can be discarded without risking losing information.', 'Variables ram and px_width are redundant, but we can’t say the same for the pair mobile_wt and sc_h.', 'Variables px_height and sc_w are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables sc_h and fc seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable sc_h might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable px_height previously than variable px_width.'] +phone_boxplots.png;A set of boxplots of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['Variable n_cores is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable talk_time shows some outliers, but we can’t be sure of the same for variable px_width.', 'Outliers seem to be a problem in the dataset.', 'Variable px_height shows some outlier values.', 'Variable sc_w doesn’t have any outliers.', 'Variable pc presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +phone_histograms_symbolic.png;A set of bar charts of the variables ['blue', 'dual_sim', 'four_g', 'three_g', 'touch_screen', 'wifi'].;['All variables, but the class, should be dealt with as date.', 'The variable four_g can be seen as ordinal.', 'The variable wifi can be seen as ordinal without losing information.', 'Considering the common semantics for touch_screen and blue variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for three_g variable, dummification would be the most adequate encoding.', 'The variable three_g can be coded as ordinal without losing information.', 'Feature generation based on variable four_g seems to be promising.', 'Feature generation based on the use of variable three_g wouldn’t be useful, but the use of blue seems to be promising.', 'Given the usual semantics of three_g variable, dummification would have been a better codification.', 'It is better to drop the variable three_g than removing all records with missing values.', 'Not knowing the semantics of four_g variable, dummification could have been a more adequate codification.'] phone_class_histogram.png;A 
bar chart showing the distribution of the target variable price_range.;['Balancing this dataset would be mandatory to improve the results.'] -phone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -phone_histograms_numeric.png;A set of histograms of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['All variables, but the class, should be dealt with as date.', 'The variable pc can be seen as ordinal.', 'The variable int_memory can be seen as ordinal without losing information.', 'Variable n_cores is balanced.', 'It is clear that variable int_memory shows some outliers, but we can’t be sure of the same for variable sc_h.', 'Outliers seem to be a problem in the dataset.', 'Variable talk_time shows a high number of outlier values.', 'Variable battery_power doesn’t have any outliers.', 'Variable ram presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for int_memory and battery_power variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mobile_wt variable, dummification would be the most adequate encoding.', 'The variable fc can be coded as ordinal without losing information.', 'Feature generation based on variable ram seems to be promising.', 'Feature generation based on the use of variable sc_w wouldn’t be useful, but the use of battery_power seems to be promising.', 'Given the usual semantics of px_width variable, dummification would have been a better codification.', 'It is better to drop the variable sc_w than removing all records with missing values.', 'Not knowing the semantics of battery_power variable, dummification could have been a more adequate codification.'] -Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['It is clear that variable Pclass is one of the five most relevant features.', 'The variable Pclass seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the first most discriminative variable regarding the class.', 'Variable Pclass is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The 
number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 72.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 181.'] -Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] +phone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +phone_histograms_numeric.png;A set of histograms of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['All variables, but the class, should be dealt with as binary.', 'The variable int_memory can be seen as ordinal.', 'The variable fc can be seen as ordinal without losing information.', 'Variable sc_h is balanced.', 'It is clear that variable sc_w shows some outliers, but we can’t be sure of the same for variable sc_h.', 'Outliers seem to be a problem in the dataset.', 'Variable pc shows a high number of outlier values.', 'Variable ram doesn’t have any outliers.', 'Variable fc presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for px_height and battery_power variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for px_height variable, dummification would be the most adequate encoding.', 'The variable battery_power can be coded as ordinal without losing information.', 'Feature generation based on variable mobile_wt seems to be promising.', 'Feature generation based on the use of variable sc_h wouldn’t be useful, but the use of battery_power seems to be promising.', 'Given the usual semantics of mobile_wt variable, dummification would have been a better codification.', 'It is better to drop the variable mobile_wt than removing all records with missing values.', 'Not knowing the semantics of talk_time variable, dummification could have been a more adequate codification.'] +Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['The variable Parch discriminates between the target values, as shown in the decision tree.', 'Variable Parch is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the 
number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 181.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 1.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 0.'] +Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the 
number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] +Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 20%.'] -Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Fare or Pclass can be discarded without losing information.', 'The variable Pclass can be discarded without risking losing information.', 'Variables Age and Parch are redundant, but we can’t say the same for the pair Fare and Pclass.', 'Variables SibSp and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Parch and Fare seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Parch might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Parch previously than variable Age.'] -Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Fare is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Pclass.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Parch doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Embarked can be seen as ordinal.', 'The variable Embarked can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Embarked variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Sex variable, dummification could have been a more adequate codification.'] -Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. 
The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Age would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Embarked than removing all records with missing values.'] +Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.'] +Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables SibSp or Parch can be discarded without losing information.', 'The variable Parch can be discarded without risking losing information.', 'Variables Fare and Age seem to be useful for classification tasks.', 'Variables Age and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Pclass seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable SibSp might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Fare previously than variable Age.'] +Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Pclass shows some outliers, but we can’t be sure of the same for variable Fare.', 'Outliers seem to be a problem in the dataset.', 'Variable Fare shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Sex can be seen as ordinal.', 'The variable Sex can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if 
applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Sex variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Embarked variable, dummification could have been a more adequate codification.'] +Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Embarked would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Age than removing all records with missing values.'] Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.'] -Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Parch shows some outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Fare and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Fare variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable SibSp seems to be promising.', 'Feature generation based on the use of variable Fare wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better 
codification.', 'It is better to drop the variable SibSp than removing all records with missing values.', 'Not knowing the semantics of Parch variable, dummification could have been a more adequate codification.'] -apple_quality_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Juiciness <= -0.3 and the second with the condition Crunchiness <= 2.25.;['It is clear that variable Sweetness is one of the four most relevant features.', 'The variable Sweetness seems to be one of the three most relevant features.', 'The variable Size discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Crunchiness is the second most discriminative variable regarding the class.', 'Variable Juiciness is one of the most relevant variables.', 'Variable Crunchiness seems to be relevant for the majority of mining tasks.', 'Variables Sweetness and Acidity seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of True Positives reported in the same tree is 50.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 1625.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 148.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (A, not B) as bad for any k ≤ 148.'] -apple_quality_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -apple_quality_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] +Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] +Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Parch can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Pclass is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable 
SibSp.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for SibSp variable, dummification would be the most adequate encoding.', 'The variable Pclass can be coded as ordinal without losing information.', 'Feature generation based on variable Parch seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable Pclass than removing all records with missing values.', 'Not knowing the semantics of SibSp variable, dummification could have been a more adequate codification.'] +apple_quality_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Juiciness <= -0.3 and the second with the condition Crunchiness <= 2.25.;['The variable Crunchiness discriminates between the target values, as shown in the decision tree.', 'Variable Juiciness is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The specificity for the presented tree is higher than 90%.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], the Decision Tree presented classifies (not A, not B) as bad.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 1625.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as good.'] +apple_quality_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] +apple_quality_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] apple_quality_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be 
explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -apple_quality_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -apple_quality_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] +apple_quality_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] +apple_quality_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] apple_quality_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -apple_quality_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 10 and 30%.'] -apple_quality_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Weight or Ripeness can be discarded without losing information.', 'The variable Juiciness can be discarded without risking losing information.', 'Variables Sweetness and Ripeness seem to be useful for classification tasks.', 'Variables Size and Ripeness are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ripeness seems to be relevant for the majority of mining tasks.', 'Variables Size and Juiciness seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Size might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Acidity previously than variable Size.'] -apple_quality_boxplots.png;A set of boxplots of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['Variable Ripeness is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Juiciness shows some outliers, but we can’t be sure of the same for variable Sweetness.', 'Outliers seem to be a problem in the dataset.', 'Variable Crunchiness shows a high number of outlier values.', 'Variable Acidity doesn’t have any outliers.', 'Variable Ripeness presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] +apple_quality_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.'] +apple_quality_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Crunchiness or Acidity can be discarded without losing information.', 'The variable Ripeness can be discarded without risking losing information.', 'Variables Juiciness and Crunchiness are redundant, but we can’t say the same for the pair Sweetness and Ripeness.', 'Variables Juiciness and Crunchiness are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Juiciness seems to be relevant for the majority of mining tasks.', 'Variables Crunchiness and Weight seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Juiciness might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Juiciness previously than variable Ripeness.']
+apple_quality_boxplots.png;A set of boxplots of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['Variable Weight is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Sweetness shows some outliers, but we can’t be sure of the same for variable Crunchiness.', 'Outliers seem to be a problem in the dataset.', 'Variable Ripeness shows a high number of outlier values.', 'Variable Acidity doesn’t have any outliers.', 'Variable Juiciness presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
 apple_quality_class_histogram.png;A bar chart showing the distribution of the target variable Quality.;['Balancing this dataset would be mandatory to improve the results.']
-apple_quality_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
-apple_quality_histograms_numeric.png;A set of histograms of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['All variables, but the class, should be dealt with as binary.', 'The variable Ripeness can be seen as ordinal.', 'The variable Acidity can be seen as ordinal without losing information.', 'Variable Sweetness is balanced.', 'It is clear that variable Ripeness shows some outliers, but we can’t be sure of the same for variable Juiciness.', 'Outliers seem to be a problem in the dataset.', 'Variable Ripeness shows some outlier values.', 'Variable Weight doesn’t have any outliers.', 'Variable Juiciness presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Sweetness and Size variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sweetness variable, dummification would be the most adequate encoding.', 'The variable Juiciness can be coded as ordinal without losing information.', 'Feature generation based on variable Juiciness seems to be promising.', 'Feature generation based on the use of variable Acidity wouldn’t be useful, but the use of Size seems to be promising.', 'Given the usual semantics of Ripeness variable, dummification would have been a better codification.', 'It is better to drop the variable Juiciness than removing all records with missing values.', 'Not knowing the semantics of Crunchiness variable, dummification could have been a more adequate codification.']
-Employee_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition JoiningYear <= 2017.5 and the second with the condition ExperienceInCurrentDomain <= 3.5.;['It is clear that variable Age is one of the four most relevant features.', 'The variable JoiningYear seems to be one of the four most relevant features.', 'The variable ExperienceInCurrentDomain discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the second most discriminative variable regarding the class.', 'Variable ExperienceInCurrentDomain is one of the most relevant variables.', 'Variable JoiningYear seems to be relevant for the majority of mining tasks.', 'Variables JoiningYear and ExperienceInCurrentDomain seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is lower than 75%.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (not A, B) as 0 for any k ≤ 44.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 1.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 1215.']
-Employee_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.']
-Employee_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.']
+apple_quality_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
+apple_quality_histograms_numeric.png;A set of histograms of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Acidity can be seen as ordinal.', 'The variable Size can be seen as ordinal without losing information.', 'Variable Juiciness is balanced.', 'It is clear that variable Weight shows some outliers, but we can’t be sure of the same for variable Sweetness.', 'Outliers seem to be a problem in the dataset.', 'Variable Juiciness shows a high number of outlier values.', 'Variable Size doesn’t have any outliers.', 'Variable Weight presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Crunchiness and Size variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sweetness variable, dummification would be the most adequate encoding.', 'The variable Juiciness can be coded as ordinal without losing information.', 'Feature generation based on variable Acidity seems to be promising.', 'Feature generation based on the use of variable Acidity wouldn’t be useful, but the use of Size seems to be promising.', 'Given the usual semantics of Acidity variable, dummification would have been a better codification.', 'It is better to drop the variable Ripeness than removing all records with missing values.', 'Not knowing the semantics of Acidity variable, dummification could have been a more adequate codification.']
+Employee_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition JoiningYear <= 2017.5 and the second with the condition ExperienceInCurrentDomain <= 3.5.;['The variable JoiningYear discriminates between the target values, as shown in the decision tree.', 'Variable ExperienceInCurrentDomain is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 44.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (A,B) as 0.']
+Employee_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.']
+Employee_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.']
 Employee_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.']
-Employee_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.']
-Employee_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.']
+Employee_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.']
+Employee_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.']
 Employee_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.']
-Employee_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.']
-Employee_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables JoiningYear or PaymentTier can be discarded without losing information.', 'The variable PaymentTier can be discarded without risking losing information.', 'Variables ExperienceInCurrentDomain and JoiningYear are redundant.', 'Variables JoiningYear and ExperienceInCurrentDomain are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable PaymentTier seems to be relevant for the majority of mining tasks.', 'Variables ExperienceInCurrentDomain and PaymentTier seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PaymentTier might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Age previously than variable ExperienceInCurrentDomain.']
-Employee_boxplots.png;A set of boxplots of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['Variable ExperienceInCurrentDomain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ExperienceInCurrentDomain shows some outliers, but we can’t be sure of the same for variable PaymentTier.', 'Outliers seem to be a problem in the dataset.', 'Variable PaymentTier shows a high number of outlier values.', 'Variable ExperienceInCurrentDomain doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
-Employee_histograms_symbolic.png;A set of bar charts of the variables ['Education', 'City', 'Gender', 'EverBenched'].;['All variables, but the class, should be dealt with as numeric.', 'The variable EverBenched can be seen as ordinal.', 'The variable EverBenched can be seen as ordinal without losing information.', 'Considering the common semantics for EverBenched and Education variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Education seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of Education seems to be promising.', 'Given the usual semantics of Education variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than removing all records with missing values.', 'Not knowing the semantics of City variable, dummification could have been a more adequate codification.']
+Employee_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 25%.']
+Employee_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PaymentTier or JoiningYear can be discarded without losing information.', 'The variable JoiningYear can be discarded without risking losing information.', 'Variables Age and PaymentTier are redundant, but we can’t say the same for the pair ExperienceInCurrentDomain and JoiningYear.', 'Variables PaymentTier and JoiningYear are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable JoiningYear seems to be relevant for the majority of mining tasks.', 'Variables Age and ExperienceInCurrentDomain seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PaymentTier might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ExperienceInCurrentDomain previously than variable PaymentTier.']
+Employee_boxplots.png;A set of boxplots of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['Variable ExperienceInCurrentDomain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows a high number of outlier values.', 'Variable JoiningYear doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.']
+Employee_histograms_symbolic.png;A set of bar charts of the variables ['Education', 'City', 'Gender', 'EverBenched'].;['All variables, but the class, should be dealt with as date.', 'The variable Gender can be seen as ordinal.', 'The variable EverBenched can be seen as ordinal without losing information.', 'Considering the common semantics for Education and City variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for City variable, dummification would be the most adequate encoding.', 'The variable City can be coded as ordinal without losing information.', 'Feature generation based on variable City seems to be promising.', 'Feature generation based on the use of variable EverBenched wouldn’t be useful, but the use of Education seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable EverBenched than removing all records with missing values.', 'Not knowing the semantics of Education variable, dummification could have been a more adequate codification.']
 Employee_class_histogram.png;A bar chart showing the distribution of the target variable LeaveOrNot.;['Balancing this dataset would be mandatory to improve the results.']
 Employee_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.']
-Employee_histograms_numeric.png;A set of histograms of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['All variables, but the class, should be dealt with as numeric.', 'The variable JoiningYear can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable PaymentTier is balanced.', 'It is clear that variable JoiningYear shows some outliers, but we can’t be sure of the same for variable ExperienceInCurrentDomain.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows a high number of outlier values.', 'Variable JoiningYear doesn’t have any outliers.', 'Variable ExperienceInCurrentDomain presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PaymentTier and JoiningYear variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ExperienceInCurrentDomain variable, dummification would be the most adequate encoding.', 'The variable ExperienceInCurrentDomain can be coded as ordinal without losing information.', 'Feature generation based on variable JoiningYear seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of JoiningYear seems to be promising.', 'Given the usual semantics of PaymentTier variable, dummification would have been a better codification.', 'It is better to drop the variable JoiningYear than removing all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.']
+Employee_histograms_numeric.png;A set of histograms of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['All variables, but the class, should be dealt with as date.', 'The variable PaymentTier can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows some outlier values.', 'Variable ExperienceInCurrentDomain doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for JoiningYear and PaymentTier variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PaymentTier variable, dummification would be the most adequate encoding.', 'The variable PaymentTier can be coded as ordinal without losing information.', 'Feature generation based on variable PaymentTier seems to be promising.', 'Feature generation based on the use of variable ExperienceInCurrentDomain wouldn’t be useful, but the use of JoiningYear seems to be promising.', 'Given the usual semantics of PaymentTier variable, dummification would have been a better codification.', 'It is better to drop the variable JoiningYear than removing all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.']