3. F1 Score’s perfect value is ________________.
4. When Precision is high and Recall is high, the F1 Score is ________________.
5. In model evaluation, a True Negative is an outcome where the model predicts the actual non-occurrence of
an event ________________.
6. The ________________ technique is used to divide data into two subsets: training data and testing data.
7. A model that performs well on training data but poorly on unseen data is said to be ________________.
8. A ________________ matrix is used to summarize the performance of a classification model.
9. In a fraud detection system, it is more important to minimize false negatives, so the key evaluation metric
should be ________________.
10. The goal of model evaluation is to ________________ error and maximize accuracy.
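Example (for reference): question 6 above mentions the train-test split. The sketch below is a minimal Python example of this technique, assuming the scikit-learn library is available; the toy dataset and the 80/20 ratio are illustrative choices, not part of the question.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hypothetical toy dataset: 10 samples, 1 feature, binary labels.
    X = np.arange(10).reshape(-1, 1)
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

    # Hold out 20% of the data for testing; the model never sees it
    # during training, so the test set measures performance on unseen data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    print(len(X_train), "training samples;", len(X_test), "testing samples")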
C. State ‘T’ for True or ‘F’ for False statements.
1. Overfitting occurs when a model performs well on test data but poorly on training data.
2. A False Positive indicates a model predicting rain, but there is no rain.
3. Model evaluation helps in understanding the strengths and weaknesses of an AI model.
4. When Precision is high and Recall is high, the F1 Score is high.
5. A confusion matrix is a summary of training data.
6. Precision measures the proportion of predicted positive cases that were actually correct.
7. A high accuracy always indicates a good AI model.
8. Error rate and accuracy are directly proportional to each other.
9. There should be no bias in selecting the technique and metric used to evaluate an AI model.
10. Accuracy and F1 score are equally good measures of evaluation of AI models.
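Example (for reference): statements 4, 6, 8, and 10 above involve the confusion-matrix metrics. The minimal Python sketch below computes Accuracy, Precision, Recall, and F1 Score from hypothetical confusion-matrix counts; the numbers are made up purely for illustration.

    # Hypothetical confusion-matrix counts for a binary classifier.
    TP, FP, FN, TN = 40, 10, 5, 45

    accuracy = (TP + TN) / (TP + TN + FP + FN)
    precision = TP / (TP + FP)   # proportion of predicted positives that are correct
    recall = TP / (TP + FN)      # proportion of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)

    print(f"Accuracy:  {accuracy:.2f}")   # 0.85
    print(f"Precision: {precision:.2f}")  # 0.80
    print(f"Recall:    {recall:.2f}")     # 0.89
    print(f"F1 Score:  {f1:.2f}")         # 0.84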
D. Very short answer type questions.
1. Why is Precision considered an important evaluation criterion?
2. What is the perfect value for an F1 Score?
3. When is F1 Score considered high?
4. How does the confusion matrix contribute to model evaluation?
5. What does a False Positive outcome signify in model evaluation?
6. In the case of predicting unexpected rain, what does a False Positive indicate?
7. Why is it important to use a test dataset that has never been used for training during evaluation?
8. What is the train-test split?
9. What is the formula for the F1 Score?
10. What happens if an AI model memorizes the training data?
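For reference, the standard F1 Score formula is the harmonic mean of Precision and Recall:

    F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

Its perfect value is 1, which is reached only when both Precision and Recall equal 1.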
E. Short answer type questions.
1. Why is Precision considered a crucial evaluation criterion for models?
2. What is the significance of a perfect F1 Score being 1?
3. How does a False Positive outcome impact model evaluation?
4. Why might Accuracy be misleading in certain model evaluation scenarios?
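Example (for reference): question 4 above asks why Accuracy can be misleading. The minimal Python sketch below uses a hypothetical, highly imbalanced fraud-detection dataset to show a model with high Accuracy but zero Recall; all counts are made up for illustration.

    # Hypothetical imbalanced dataset: 95 non-fraud cases, 5 fraud cases.
    # A model that always predicts "non-fraud" never raises a false alarm,
    # but it also never catches a single fraud case.
    TP, FP, FN, TN = 0, 0, 5, 95

    accuracy = (TP + TN) / (TP + TN + FP + FN)
    recall = TP / (TP + FN)

    print(f"Accuracy: {accuracy:.2f}")  # 0.95 -- looks impressive
    print(f"Recall:   {recall:.2f}")    # 0.00 -- every fraud case is missed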