Page 313 - AI Computer 10

The various components of the model are:

         u Convolution Layer: The first layer of a CNN is the Convolution Layer. High-level features of an image,
             such as edges, are extracted with the help of the convolution operation. In general, several kernels are
             available in the Convolution layer that are used to generate features from an input image. The output of
             the Convolution layer is called the Feature Map. Sometimes, feature maps are also referred to as Activation
             Maps.
             With the help of a feature map, we can do the following:

             •  Reduce the image size to make processing faster and more efficient.
             •  Focus on the specific features that help us process the image.
             For example, biometric systems on smartphones gather important information by recognising facial features
             like the eyes, nose, and mouth instead of scanning the whole face.
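             The convolution operation described above can be sketched in Python with NumPy. This is a minimal illustration, not a library implementation: the image, the hypothetical vertical edge-detection kernel, and the helper name `convolve2d` are chosen only for this example.

             ```python
             import numpy as np

             def convolve2d(image, kernel):
                 # Slide the kernel over the image and sum the
                 # element-wise products to build the feature map.
                 kh, kw = kernel.shape
                 oh = image.shape[0] - kh + 1
                 ow = image.shape[1] - kw + 1
                 out = np.zeros((oh, ow))
                 for i in range(oh):
                     for j in range(ow):
                         out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
                 return out

             # A small image with a vertical edge between dark (0) and bright (10)
             image = np.array([[0, 0, 0, 10, 10],
                               [0, 0, 0, 10, 10],
                               [0, 0, 0, 10, 10],
                               [0, 0, 0, 10, 10],
                               [0, 0, 0, 10, 10]], dtype=float)

             # Hypothetical vertical edge-detection kernel
             kernel = np.array([[-1, 0, 1],
                                [-1, 0, 1],
                                [-1, 0, 1]], dtype=float)

             feature_map = convolve2d(image, kernel)
             print(feature_map)  # large values where the vertical edge lies
             ```

             Notice that the feature map is larger in value exactly where the edge occurs, which is how a kernel "extracts" a feature.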
         u Rectified Linear Unit Function (ReLU): The Rectified Linear Unit Function is the next layer of a CNN after
             the Convolution Layer. A feature map extracted from the Convolution layer is passed on to the ReLU layer.
             The basic function of this layer is to remove all the negative numbers that exist in a feature map. In other
             words, this layer introduces non-linearity into a feature map.
             ReLU is a non-linear activation function that is commonly used in deep neural networks. This function
             can be represented as:

                  f(x) = max(0, x)        where x = input value

            You may observe that the output of ReLU is the maximum of two values, i.e., zero and the input value. The
            output is zero when the input value is negative; otherwise, it equals the input. Let us understand the concept
            of negative and positive values with the help of an example. Suppose you have a 3 × 3 matrix of an output
            image:

                                                 –2             5             –2
                                                  0             3             –3
                                                  1             4              0
             Here, you can see that negative values exist in the matrix. These values are set to zero by the ReLU layer.
            After applying ReLU, the output matrix will be:

                                                  0             5              0
                                                  0             3              0
                                                  1             4              0
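            The ReLU step above can be reproduced in one line with NumPy. This sketch uses the same 3 × 3 matrix as the example; `np.maximum(0, x)` applies f(x) = max(0, x) element-wise.

            ```python
            import numpy as np

            # The 3x3 feature map from the example, containing negative values
            feature_map = np.array([[-2, 5, -2],
                                    [ 0, 3, -3],
                                    [ 1, 4,  0]])

            # ReLU: replace every negative value with zero
            relu_output = np.maximum(0, feature_map)
            print(relu_output)
            # [[0 5 0]
            #  [0 3 0]
            #  [1 4 0]]
            ```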
         u Pooling Layer: The working procedure of a Pooling layer is similar to that of the Convolution Layer. The
             Pooling layer is responsible for reducing the spatial size of the convolved feature map while still retaining
             the important features.

            The Pooling layer plays an important role in CNNs because it performs various kinds of tasks, such as:
             •  making the image smaller and more manageable.
             •  making the image more resistant to small transformations, distortions, and translations.

            The two types of pooling that can be performed on an image are as follows:
             •  Max Pooling: Max Pooling selects the maximum element from the portion of the image covered by the
                kernel. Thus, the output of the max pooling layer is a feature map that contains the most prominent
                features of the previous feature map.
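                Max pooling can be sketched as follows. This is an illustrative implementation, assuming a 2 × 2 window with stride 2 on a small made-up 4 × 4 feature map; the helper name `max_pool` is chosen only for this example.

                ```python
                import numpy as np

                def max_pool(fmap, size=2, stride=2):
                    # Take the maximum of each size x size window,
                    # moving the window by `stride` each step.
                    oh = (fmap.shape[0] - size) // stride + 1
                    ow = (fmap.shape[1] - size) // stride + 1
                    out = np.zeros((oh, ow))
                    for i in range(oh):
                        for j in range(ow):
                            out[i, j] = fmap[i*stride:i*stride+size,
                                             j*stride:j*stride+size].max()
                    return out

                # A made-up 4x4 feature map
                fmap = np.array([[1, 3, 2, 1],
                                 [4, 6, 5, 0],
                                 [7, 2, 8, 3],
                                 [1, 0, 4, 9]])

                print(max_pool(fmap))
                # [[6. 5.]
                #  [7. 9.]]
                ```

                Each output value keeps only the strongest response in its window, which is why the pooled map is smaller yet still retains the most prominent features.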