Page 225 - AI Computer 10
4. How does a confusion matrix contribute to evaluating model performance?
Ans. A confusion matrix is important because it allows direct comparison of values such as True Positives,
False Positives, True Negatives, and False Negatives, which helps in calculating the various evaluation metrics.
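The four values above can be counted directly from a list of predictions and actual outcomes. A minimal sketch, using made-up flood labels (1 = flood, 0 = no flood):

```python
# Hypothetical predictions and actual outcomes (1 = flood, 0 = no flood).
reality     = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0, 1, 0]

# Count each cell of the confusion matrix by comparing pairs.
tp = sum(1 for p, r in zip(predictions, reality) if p == 1 and r == 1)
tn = sum(1 for p, r in zip(predictions, reality) if p == 0 and r == 0)
fp = sum(1 for p, r in zip(predictions, reality) if p == 1 and r == 0)
fn = sum(1 for p, r in zip(predictions, reality) if p == 0 and r == 1)

print(tp, tn, fp, fn)  # → 3 3 1 1
```

From these four counts, all the evaluation metrics in this chapter (Accuracy, Precision, Recall, F1 Score) can be calculated.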
5. Briefly describe how Precision is calculated in model evaluation.
Ans. Precision is calculated as the ratio of True Positives to the sum of True Positives and False Positives,
emphasising accurate positive predictions.
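The formula above can be written as a one-line function. A minimal sketch, with made-up counts:

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP):
    # of all positive predictions made, the fraction that were correct.
    return tp / (tp + fp)

# Hypothetical counts: 3 True Positives, 1 False Positive.
print(precision(3, 1))  # → 0.75
```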
6. What does a False Negative indicate in the context of model evaluation?
Ans. A False Negative occurs when the model predicts that an outcome/event will not occur, but the actual
event/outcome does occur, highlighting a potential area for improvement.
7. Define F1 Score and its significance in evaluating models.
Ans. F1 Score is the harmonic mean of Precision and Recall, offering a balanced measure of a model’s
performance, considering both false positives and false negatives.
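The harmonic mean mentioned above has a simple closed form. A minimal sketch, with made-up Precision and Recall values:

```python
def f1_score(precision, recall):
    # F1 = 2 * (Precision * Recall) / (Precision + Recall),
    # the harmonic mean of the two metrics.
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: Precision = 0.75, Recall = 0.6.
print(round(f1_score(0.75, 0.6), 4))  # → 0.6667
```

Because it is a harmonic mean, F1 is pulled towards the lower of the two values, so a model cannot score well by being strong on only one of Precision or Recall.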
F. Long answer type questions.
1. Explain the importance of using a separate test dataset for model evaluati on.
Ans. Using a separate test dataset ensures that the model is assessed on data it has never encountered during
training, providing insights into its ability to generalise and perform well on new, unseen instances. This
process helps gauge the model’s reliability and effectiveness in real-world scenarios beyond its training
data.
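Holding out a test set is usually done by shuffling the data and splitting it once. A minimal sketch, using made-up data and a hypothetical 80/20 split:

```python
import random

data = list(range(100))       # 100 hypothetical labelled examples
random.seed(42)               # fixed seed so the split is repeatable
random.shuffle(data)

split = int(0.8 * len(data))  # 80% for training, 20% held out for testing
train_set, test_set = data[:split], data[split:]

# The model is trained on train_set only; test_set stays unseen until
# evaluation, so performance on it reflects generalisation.
print(len(train_set), len(test_set))  # → 80 20
```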
2. How does the term “Reality” contribute to understanding the performance of an AI model, especially in
the context of flood predictions?
Ans. “Reality” represents the actual outcome/event that occurs, for which the AI model makes a prediction.
It helps in evaluating how correct or incorrect the model is in predicting an outcome, based on the data
provided to it. In flood predictions, it indicates whether there is an actual flood or not. Evaluating the
model based on this reality helps measure its accuracy and reliability, ensuring that predictions align
with the true conditions of the environment.
3. Discuss the significance of True Positives in model evaluation and provide an example scenario where it
holds particular importance.
Ans. True Positives signify instances where the model correctly predicts an actual event. In scenarios like
medical diagnoses or disaster predictions, a True Positive means the model accurately identified a
condition or event, showcasing its capability to make correct positive predictions, which is crucial for
decision-making in such critical situations.
4. Explain the role of a confusion matrix in evaluating the performance of an AI model.
Ans. A confusion matrix is a table that summarises the model’s prediction results as compared to the actual
events that occurred, in terms of True Positives, True Negatives, False Positives, and False Negatives. It
serves as a valuable tool for understanding the distribution of outcomes and the model’s overall
effectiveness, aiding in assessing its strengths and areas for improvement.
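The table described above can be laid out as a 2×2 grid. A minimal sketch, using hypothetical flood-prediction counts:

```python
# Hypothetical counts for a flood-prediction model.
tp, fn = 3, 1   # actual floods:   correctly / incorrectly predicted
fp, tn = 1, 3   # actual no-flood: wrongly / correctly predicted

print("                Predicted: Yes   Predicted: No")
print(f"Actual: Yes     TP = {tp}           FN = {fn}")
print(f"Actual: No      FP = {fp}           TN = {tn}")
```

Reading along the main diagonal (TP and TN) shows where the model agrees with reality; the off-diagonal cells (FP and FN) show its two kinds of mistakes.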
5. How is Precision calculated, and why is it considered a critical parameter in model evaluation?
Ans. Precision is calculated as the ratio of True Positives to the sum of True Positives and False Positives. It
is crucial because it measures the accuracy of positive predictions. High precision indicates that the
model is making fewer false positive predictions, which is particularly important in applications where
inaccurate positive predictions can have significant consequences, such as predicting floods or water
shortages.
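The link between fewer false positives and higher precision can be seen by comparing two hypothetical flood-warning models that catch the same number of real floods:

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP).
    return tp / (tp + fp)

# Model A raises many alarms, so it collects more false positives.
model_a = precision(tp=8, fp=8)   # half of its flood alarms were wrong
# Model B is more selective, so it raises fewer false alarms.
model_b = precision(tp=8, fp=2)   # most of its alarms were real floods

print(model_a, model_b)  # → 0.5 0.8
```

With identical True Positives, the model with fewer False Positives scores higher precision, which is why precision matters when a false alarm is costly.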
91