Introduction to Deep Learning: What Are Convolutional Neural Networks?

December 12, 2019 By Stanley Isaacs


A convolutional neural network, or CNN, is a network architecture for deep learning that learns directly from images. A CNN is made up of several layers that process and transform an input to produce an output. You can train a CNN to do image analysis tasks, including scene classification, object detection and segmentation, and image processing. To understand how CNNs work, we'll cover three key concepts: local receptive fields, shared weights and biases, and activation and pooling. Finally, we'll briefly discuss the three ways to train CNNs for image analysis.

Let's start with the concept of local receptive fields. In a typical neural network, each neuron in the input layer is connected to a neuron in the hidden layer. In a CNN, however, only a small region of input-layer neurons connects to neurons in the hidden layer. These regions are referred to as local receptive fields. The local receptive field is translated across the image to create a feature map from the input layer to the hidden-layer neurons. You can use convolution to implement this process efficiently, which is why it is called a convolutional neural network.

The second concept is shared weights and biases. Like a typical neural network, a CNN has neurons with weights and biases. The model learns these values during training and continuously updates them with each new training example. In a CNN, however, the weight and bias values are the same for all hidden neurons in a given layer. This means that all hidden neurons detect the same feature, such as an edge or a blob, in different regions of the image, which makes the network tolerant to translation of objects in an image. For example, a network trained to recognize cats will be able to do so wherever the cat appears in the image.

The third and final concept is activation and pooling. The activation step applies a transformation to the output of each neuron by using an activation function. The rectified linear unit, or ReLU, is a commonly used activation function: it passes a positive output through unchanged and maps a negative output to zero. You can further transform the output of the activation step by applying a pooling step. Pooling reduces the dimensionality of the feature map by condensing the output of small regions of neurons into a single output. This helps simplify the following layers and reduces the number of parameters that the model needs to learn.
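As a concrete illustration of local receptive fields and weight sharing, here is a minimal sketch in Python with PyTorch. This is not the MATLAB tooling the video describes, and the image size (28-by-28 grayscale) and layer sizes are arbitrary choices for the example; the point is that a small kernel slid across the image reuses the same weights everywhere, so it needs far fewer parameters than a fully connected layer over the same input.

```python
import torch
import torch.nn as nn

# A grayscale 28x28 input image (batch of 1, 1 channel).
image = torch.randn(1, 1, 28, 28)

# One convolutional layer: each output neuron sees only a 3x3 local
# receptive field of the input, and the same 3x3 kernel (plus one bias)
# is shared across every position in the image.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

feature_map = conv(image)
print(feature_map.shape)  # torch.Size([1, 8, 28, 28]) -- 8 feature maps

# Weight sharing keeps the parameter count small:
conv_params = sum(p.numel() for p in conv.parameters())
print(conv_params)  # 8 * (3*3*1) + 8 = 80 parameters

# A fully connected layer mapping the same input to the same number of
# outputs would need 28*28 * (8*28*28) weights plus biases.
fc = nn.Linear(28 * 28, 8 * 28 * 28)
fc_params = sum(p.numel() for p in fc.parameters())
print(fc_params)  # 4,923,520 parameters -- versus 80 for the convolution
```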
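Activation and pooling can be sketched the same way. In this small self-contained snippet, ReLU zeroes out the negative values of a toy feature map and passes positive values through unchanged, and 2-by-2 max pooling condenses each non-overlapping 2-by-2 region into a single value, halving the height and width.

```python
import torch
import torch.nn.functional as F

# A tiny feature map (batch 1, 1 channel, 4x4) with positive and negative values.
feature_map = torch.tensor([[[[ 1.0, -2.0,  3.0, -4.0],
                              [-1.0,  2.0, -3.0,  4.0],
                              [ 0.5, -0.5,  1.5, -1.5],
                              [-2.5,  2.5, -3.5,  3.5]]]])

# ReLU: positive values pass through unchanged, negative values become zero.
activated = F.relu(feature_map)

# 2x2 max pooling: each non-overlapping 2x2 region is condensed to its maximum,
# halving the height and width of the feature map.
pooled = F.max_pool2d(activated, kernel_size=2)

print(pooled)        # tensor([[[[2.0, 4.0], [2.5, 3.5]]]])
print(pooled.shape)  # torch.Size([1, 1, 2, 2])
```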
Now let's pull it all together. Using these three concepts, we can configure the layers in a CNN. A CNN can have tens or hundreds of hidden layers, each learning to detect different features in an image. Every hidden layer increases the complexity of the learned image features: the first hidden layer learns how to detect edges, for example, while the last learns how to detect more complex shapes. Just like in a typical neural network, the final layer connects every neuron from the last hidden layer to the output neurons to produce the final output.

There are three ways to use CNNs for image analysis. The first method is to train the CNN from scratch. This method is highly accurate, but it is also the most challenging: you might need hundreds of thousands of labeled images and significant computational resources. The second method relies on transfer learning, which is based on the idea that you can use knowledge of one type of problem to solve a similar problem. For example, you could use a CNN model that has been trained to recognize animals to initialize and train a new model that differentiates between cars and trucks. This method requires less data and fewer computational resources than the first. With the third method, you can use a pretrained CNN to extract features for training a machine learning model. For example, a hidden layer that has learned how to detect edges in an image is broadly relevant to images from many different domains. This method requires the least data and computational resources. Code sketches of a complete small network and of these last two workflows follow below.

I hope you found this video useful. For more information, visit mathworks.com/deep-learning.
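To show the layers pulled together in code, here is a minimal sketch, again in Python/PyTorch rather than MATLAB and with arbitrary layer sizes, of a small CNN for a hypothetical 10-class classification task on 28-by-28 grayscale images: stacked convolution, ReLU, and pooling layers followed by a final fully connected layer that connects every neuron from the last hidden layer to the output neurons.

```python
import torch
import torch.nn as nn

# A small CNN for 28x28 grayscale images and 10 output classes.
# Earlier layers tend to learn simple features such as edges; later
# layers combine them into more complex shapes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local receptive fields, shared weights
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # final layer: every neuron -> 10 output scores
)

scores = model(torch.randn(1, 1, 28, 28))
print(scores.shape)  # torch.Size([1, 10]) -- one score per class
```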
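For the second and third methods, here is a hedged sketch of how transfer learning and feature extraction commonly look with PyTorch and torchvision (assuming a recent torchvision; the pretrained resnet18 model and the two-class cars-versus-trucks setup are illustrative stand-ins, not the workflow shown in the video).

```python
import torch
import torch.nn as nn
from torchvision import models

# Method 2: transfer learning -- start from a network pretrained on a large
# dataset and fine-tune it on a new, smaller task (e.g. cars vs. trucks).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the final layer with 2 new classes
# ...then train `model` as usual on the new labeled images.

# Method 3: feature extraction -- freeze the pretrained layers and use them
# only to compute features for a separate machine learning model.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Identity()                       # output the learned features directly

features = model(torch.randn(1, 3, 224, 224))
print(features.shape)  # torch.Size([1, 512]) -- feature vector for one image
```

The trade-off mirrors the description above: fine-tuning updates the pretrained weights on the new task, while feature extraction freezes them and trains only a separate, lightweight classifier on the extracted features.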