Page 223 - AI Computer 10
9. What is the perfect value for an F1 Score?
a. 0 b. 0.5
c. 0.1 d. 1
10. In which scenario might Accuracy alone be insufficient for model evaluation?
a. Predicting flood occurrence b. Predicting water shortage in schools
c. Predicting unexpected rain d. Predicting the outcome of a sports event
Answers
1. (b) 2. (b) 3. (a) 4. (b) 5. (c) 6. (a)
7. (c) 8. (a) 9. (d) 10. (b)
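To see the idea behind question 10, here is a minimal sketch (with made-up illustrative counts, not from the text) of how a model can score high Accuracy while being useless at catching a rare event:

```python
# Hypothetical counts: 100 days of data, floods on only 1 day.
# A lazy model that always predicts "no flood" yields:
tp, fp, fn, tn = 0, 0, 1, 99

accuracy = (tp + tn) / (tp + fp + fn + tn)      # 99/100 = 0.99, looks excellent
recall = tp / (tp + fn) if (tp + fn) else 0.0   # 0/1 = 0.0, misses every flood

print(accuracy, recall)
```

Despite 99% Accuracy, Recall is zero, which is why Accuracy alone is not enough when one outcome is rare.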
B. Fill in the blanks.
1. Model evaluation is the process of understanding the _________________ of an AI model by using a test
dataset that was not part of the training data.
2. In model evaluation, the term _________________ refers to the output generated by the machine.
3. The two terms used to determine the efficiency of a model are _________________ and
_________________.
4. _________________ is an outcome where the model correctly predicts positive reality.
5. _________________ is the ratio of the correct number of predictions to the total number of predictions.
6. Precision is the ratio of True Positives to the sum of True Positives and _________________.
7. Recall is the ratio of True Positives to the sum of True Positives and _________________.
8. F1 Score is the harmonic mean of _________________ and _________________.
9. If Precision is high and Recall is low, then the F1 Score is _________________.
10. _________________ may not be enough to ensure the model’s performance on data that has never
been used.
Answers
1. reliability 2. Prediction 3. Prediction, Reality
4. True Positive 5. Model Accuracy 6. False Positives
7. False Negatives 8. Precision, Recall 9. low
10. Accuracy
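The formulas behind answers 5 to 9 can be sketched in a few lines of Python (a hedged illustration; the function name and sample counts are invented for this example, not part of the text):

```python
def evaluate(tp, fp, fn, tn):
    """Compute the four metrics from a confusion matrix's counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)          # correct / total predictions
    precision = tp / (tp + fp)                          # TP / (TP + False Positives)
    recall = tp / (tp + fn)                             # TP / (TP + False Negatives)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

# Sample counts chosen arbitrarily: all four metrics come out near 0.8.
print(evaluate(tp=40, fp=10, fn=10, tn=40))

# A model with no False Positives or False Negatives reaches the
# perfect F1 Score of 1 (question 9).
print(evaluate(tp=50, fp=0, fn=0, tn=50))
```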
C. State ‘T’ for True or ‘F’ for False statements.
1. The primary purpose of model evaluation is to train the AI model.
2. In model evaluation, the term “Reality” represents the training dataset.
3. False Positive is an outcome where the model predicts an event to occur but in reality, the event
does not occur.
4. A confusion matrix is a matrix used for evaluating the performance of a model based on three
parameters: Prediction, Reality, and Probability.
5. Model Accuracy is calculated as the ratio of all correct predictions to the total number of
predictions.
6. Precision is the ratio of True Positives to the sum of True Positives and False Negatives.