
Model Evaluation provides feedback that guides the iterative development process in AI projects. Data scientists
        can make informed decisions based on model performance metrics and refine their approaches by identifying
        strengths and addressing weaknesses, ultimately leading to more effective and reliable AI systems.
        Evaluation is crucial for AI models because it determines their reliability, accuracy, and ability to generalise to
        unseen data, ensuring they perform effectively and ethically in real-world applications.

        MODEL EVALUATION TECHNIQUES

        Model Evaluation techniques are essential for assessing the performance of machine learning models. They
        help determine how well a model generalises to unseen data, guiding improvements and ensuring reliability.
        Depending on the type of model and the specific goals of the analysis, various techniques can be utilised.

        Train-Test Split Method

The Train-Test Split method is a fundamental technique used in machine learning to assess the performance of a model. This approach involves dividing the available dataset into two separate subsets: one for training the model and the other for testing its performance.
This method is appropriate when a sufficiently large dataset is available, as it provides a clear framework for testing how well a model can generalise to unseen data, thereby assisting in the model selection process.

Recall that a Training dataset consists of data used to train an AI model, whereas a Test dataset refers to data provided to the model to evaluate its performance.
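
As a minimal sketch of the Train-Test Split method, assuming Python with the scikit-learn library (the Iris dataset and the 80/20 split ratio below are illustrative choices, not part of the method itself):

```python
# A minimal sketch of the Train-Test Split method using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small sample dataset: features X and labels y.
X, y = load_iris(return_X_y=True)

# Hold back 20% of the data for testing; the model never sees it while training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train the model on the training subset only.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate the model on the unseen test subset.
predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```

Because the test subset is kept aside during training, the accuracy printed at the end estimates how the model would perform on genuinely new data.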
        Need for Train-Test Split Method

        The train-test split method plays an important role in machine learning to ensure that models are effective,
        generalisable, and reliable. It serves as a safeguard against overfitting, provides a structured approach for model
        evaluation, and assists in the selection of the most suitable model for deployment. By creating a clear distinction
        between training and testing datasets, data scientists can build and validate models that are equipped to perform
        well in real-world scenarios.

Overfitting in a machine learning model refers to the phenomenon where a model memorises the training data too well and is unable to make correct predictions when given a fresh set of unseen data. Overfitting usually occurs when the model trains on a dataset for too long, memorising it in the process, or when the model becomes too specialised to the training dataset and loses its ability to generalise and properly analyse new datasets.
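
As a hedged illustration (the synthetic dataset and the unlimited-depth decision tree below are assumptions chosen for demonstration), a large gap between training accuracy and test accuracy is the classic symptom of overfitting:

```python
# Illustrating overfitting: a decision tree with no depth limit can memorise
# noisy training data, scoring far higher on it than on held-out test data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset with label noise (flip_y), so memorisation cannot generalise.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = DecisionTreeClassifier(random_state=0)  # no depth limit: free to memorise
model.fit(X_train, y_train)

# A wide gap between these two scores signals overfitting.
print("Training accuracy:", model.score(X_train, y_train))  # close to 1.0
print("Test accuracy:    ", model.score(X_test, y_test))    # noticeably lower
```

Limiting the tree's depth, or stopping training earlier, would typically narrow this gap at the cost of slightly lower training accuracy.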


      Knowledge Bot
  At the time of evaluation, we should never test the model with data that was used for training, as the model will simply reproduce the correct labels or outputs it has already seen, and hence cannot be evaluated satisfactorily.

