Page 217 - AI Computer 10

 u False Negative (FN): The model predicts a negative value, but the actual value is positive. For example,
             you predict that the team will not win the match, but it wins the match. This is also known as Type II Error.

        Parameters to Evaluate an AI Model

        The confusion matrix allows us to understand the prediction results. It is not an evaluation metric itself but a
        record that helps in evaluation. We also need ways to evaluate the model numerically. The commonly used
        methods of model evaluation are Accuracy, Precision, Recall, and F1 Score.

        Before we discuss each metric, let us understand the following:
        Number of correct predictions = TP + TN
        Total number of predictions = TP + TN + FP + FN

        Total positive cases = TP + FN
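The counts above can be sketched in code. This is a minimal example with hypothetical data, where 1 marks a positive label (e.g. "flood") and 0 a negative label (e.g. "no flood"):

```python
# Count the four confusion-matrix cells from lists of actual and
# predicted labels (1 = positive, 0 = negative). Hypothetical data.

def confusion_counts(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

actual    = [1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0]
tp, tn, fp, fn = confusion_counts(actual, predicted)
print(tp, tn, fp, fn)                            # 1 2 1 1
print("Correct predictions:", tp + tn)           # 3
print("Total predictions:", tp + tn + fp + fn)   # 5
print("Total positive cases:", tp + fn)          # 2
```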

        Now, let us learn about the various evaluation metrics.
 u Accuracy: The Accuracy of a model is defined as the ratio of the number of correct predictions to the total
     number of predictions, usually expressed as a percentage. Thus, the formula of accuracy is:

                         Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100

     Let us understand the term ‘Accuracy’ with the help of the flood example. Suppose the model always
     predicts that there is no flood, but in reality there is a 10% chance of flooding.
     Out of 100 cases, the model predicts correctly for the 90 cases with no flood, but for the 10 cases where a
     flood actually occurs, it incorrectly predicts no flood.
     Thus, TP = 0, TN = 90, FP = 0, FN = 10, Total cases = 100
     Therefore,

                         Accuracy = (0 + 90) / 100 = 0.9 or 90%

     The accuracy looks high, but the model misses every actual flood, which is exactly the event we care about.
     Thus, accuracy alone may not be enough to ensure the model’s performance, especially on data it has
     never seen.
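The flood example can be worked through in a few lines of code, using the counts from the text:

```python
# Flood example from the text: the model always predicts "no flood",
# and 10 of the 100 cases actually flood.
tp, tn, fp, fn = 0, 90, 0, 10

accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
print(accuracy)  # 90.0 — high accuracy, yet every actual flood was missed
```

Note that all 90 correct predictions come from True Negatives; the high score hides the fact that TP = 0.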
 u Precision: The term ‘Precision’ is defined as the ratio of the number of True Positives to the number of
     all positive predictions, usually expressed as a percentage. Thus, the formula of Precision can be written as:

                         Precision = TP / (TP + FP) × 100
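As a quick numerical check, precision — True Positives divided by all positive predictions (TP + FP) — can be computed with hypothetical counts:

```python
# Hypothetical counts: of 50 positive predictions, 40 are actually positive.
tp, fp = 40, 10

precision = tp / (tp + fp) * 100
print(precision)  # 80.0
```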