0:00
Hi, and welcome back. In this lesson, you will learn how a neural network works.
0:05
In our previous lesson, we learned how a neural network is similar to the human brain.
0:10
We concluded that a neural network is a computational model inspired by the human brain
0:15
that processes complex information, recognizes patterns, and makes predictions. We know that a neural network consists of interconnected nodes, also known as neurons, organized into layers.
0:28
These layers are the input layer, the hidden layers, and the output layer.
0:32
Each layer in the neural network plays a very important role.
0:37
Each layer performs certain computations and contributes to the network's overall ability to learn.
0:46
Let's take an example. Suppose you want to build a neural network to recognize two items:
0:52
one is an orange and the other is a tomato. So what happens?
0:57
The input layer first collects the raw information about the item:
1:01
its shape, its texture, its edges, and many other attributes you can consider.
1:08
So this is how it collects the raw information, or you can say the features,
1:12
from the item. In the input layer, each neuron represents a pixel in the image.
1:19
In our case it is an RGB image, so each neuron holds a numerical value for the intensity of the red, green, and blue channels. Then we have the hidden layers, which come after the input layer and are where the computation happens.
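To make this concrete, here is a minimal sketch of how an RGB image could be turned into the values the input layer receives. The 64x64 size and the use of NumPy are assumptions for illustration, not details from the lesson.

```python
import numpy as np

# A minimal sketch (not from the lesson): a hypothetical 64x64 RGB image,
# stored as an array of channel intensities in [0, 1]. A real pipeline would
# load the image from disk; random values are used here just to show shapes.
image = np.random.rand(64, 64, 3)

# The input layer sees one value per pixel per channel, so the image is
# flattened into a single feature vector before being fed to the network.
input_vector = image.reshape(-1)
print(input_vector.shape)  # (12288,) = 64 * 64 * 3
```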
1:36
Here the network extracts the features and builds up the patterns it can recognize.
1:44
All of this happens in the hidden layers, which is why they are so important.
1:50
In a hidden layer, each neuron collects the inputs from the previous layer and
2:01
computes a weighted sum. This weighted sum decides whether the
2:06
neuron should be activated or not. How does it decide? Basically, if the activation value is near
2:13
zero, the neuron is not activated; if the value is near one, it is activated.
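Here is a minimal sketch of one hidden neuron computing a weighted sum and an activation. The specific input values, weights, bias, and the choice of a sigmoid activation are illustrative assumptions, not values from the lesson.

```python
import numpy as np

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Outputs of the previous layer and the connection strengths (made-up values).
inputs  = np.array([0.8, 0.2, 0.5])
weights = np.array([0.4, -0.6, 0.9])
bias    = 0.1

weighted_sum = np.dot(inputs, weights) + bias
activation   = sigmoid(weighted_sum)

# A value near 1 means the neuron activates; near 0 means it does not.
print(weighted_sum, activation)
```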
2:19
Now the second question is how many hidden layers there should be
2:24
and how many neurons each should have. It totally depends on the use case of your application.
2:29
It depends on the complexity of your problem and the size of your dataset.
2:34
These are the factors to consider when deciding the number of hidden layers in your neural network.
2:43
For instance, if you want to build a deep neural network,
2:49
you include multiple hidden layers in between, and that tends to give
2:53
impressive performance, especially for tasks like recognizing objects or, you can say, face recognition. That is all about the hidden layers.
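As a rough illustration of what "shallow" versus "deep" might look like, here is a sketch of two possible layer-size configurations. The specific sizes are assumptions for illustration only, not recommendations from the lesson.

```python
# Layer sizes, listed as the number of neurons per layer (illustrative only).
shallow_network = [12288, 16, 2]            # input -> one hidden layer -> output
deep_network    = [12288, 256, 128, 64, 2]  # input -> several hidden layers -> output

# Deeper networks can build up more abstract features (edges -> textures ->
# shapes -> objects), but they need more data and more training time.
print(len(shallow_network) - 2, "hidden layer(s) vs", len(deep_network) - 2)
```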
3:07
We also have another kind of layer, the output layer, which is the final layer, where you
3:12
get the result. If you are building a binary
3:19
classification model, you will have a single neuron in the output layer, but
3:24
if you are building a multi-class classification model, you will have one neuron for each class.
3:30
This is how you decide the number of neurons in the output layer.
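Here is a minimal sketch of how the output layer's values could be turned into probabilities in each case. The scores, the three-class example, and the use of sigmoid and softmax are illustrative assumptions rather than details stated in the lesson.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Binary classification: a single output neuron (made-up score).
binary_score = np.array([1.3])
print(sigmoid(binary_score))        # probability of the positive class

# Multi-class classification: one neuron per class (e.g. orange / tomato / apple).
class_scores = np.array([2.0, 0.5, -1.0])
print(softmax(class_scores))        # probabilities that sum to 1
```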
3:36
Now, there is something that comes up a lot, which is the weight.
3:40
Suppose these are two neurons; the strength of the connection between them
3:49
is what we call a weight. Initially, we put random values
3:53
into the weights, but they are adjusted during the training process
3:57
so that the network predicts the right thing, which increases the accuracy of our neural network.
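Here is a minimal sketch of random weight initialization for a single layer. The layer sizes, the random seed, and the small scale are illustrative assumptions.

```python
import numpy as np

# Random initial weights for a layer with 3 inputs and 4 neurons (sizes are
# illustrative). Small random values are a common starting point; training
# then adjusts them to reduce the prediction error.
rng = np.random.default_rng(seed=0)
W = rng.normal(loc=0.0, scale=0.01, size=(3, 4))  # one weight per connection
print(W)
```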
4:03
Additionally, we also have biases. A bias is an extra
4:09
parameter that helps the network make predictions more accurately; it allows the
4:16
network to shift the activation function's output in the desired direction. Now let us
4:21
also talk about the activation function. The activation function introduces non-linearity into the neural network and decides whether a neuron should be activated or not. Now there is something very important called feedforward.
4:38
It is the process of passing data through the network, where each neuron in a layer receives
4:44
inputs from the previous layer, multiplies them by weights, adds biases, and passes the result through an activation function.
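Here is a minimal sketch of a feedforward pass through two layers. The layer sizes, random weights, zero biases, and the sigmoid activation are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, layers):
    # Pass the input through each (weights, bias) pair in turn.
    activation = x
    for W, b in layers:
        weighted_sum = W @ activation + b     # multiply by weights, add bias
        activation = sigmoid(weighted_sum)    # apply the activation function
    return activation

# Made-up sizes: 4 inputs -> 3 hidden neurons -> 1 output neuron.
rng = np.random.default_rng(seed=1)
layers = [
    (rng.normal(size=(3, 4)), np.zeros(3)),  # hidden layer
    (rng.normal(size=(1, 3)), np.zeros(1)),  # output layer
]
x = np.array([0.2, 0.7, 0.1, 0.9])
print(feedforward(x, layers))
```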
4:52
Now let us understand how to train a neural network. Suppose you are using a large dataset; it is divided into two parts.
5:01
One is the training dataset, the other is the testing dataset. We use the training data to adjust
5:08
the network's weights and biases, the learnable parameters you can say,
5:13
and to make our neural network accurate at predicting the actual result.
5:20
The testing data will be used to evaluate our neural network's performance.
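Here is a minimal sketch of splitting a dataset into training and testing parts. The dataset itself, the 80/20 split, and the random seed are illustrative assumptions.

```python
import numpy as np

# A made-up dataset: 100 samples with 5 features each, and binary labels.
rng = np.random.default_rng(seed=2)
X = rng.random((100, 5))
y = rng.integers(0, 2, size=100)

# Shuffle the indices, then keep 80% for training and 20% for testing.
indices = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = indices[:split], indices[split:]

X_train, y_train = X[train_idx], y[train_idx]   # used to adjust weights and biases
X_test,  y_test  = X[test_idx],  y[test_idx]    # held out to evaluate performance
print(X_train.shape, X_test.shape)
```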
5:26
During training, the network compares its predictions with the actual values and
5:30
calculates an error. This error is then propagated backward through the network using a process called backpropagation.
5:38
The weights and biases are adjusted iteratively in order to minimize the error
5:45
and improve the network's accuracy. This is how the training works.
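Here is a minimal sketch of the training idea for a single sigmoid neuron: make a prediction, compare it with the actual value, and adjust the weights and bias to reduce the error. Full backpropagation applies the same chain-rule idea layer by layer; the dataset, learning rate, and number of epochs below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A made-up dataset: 20 samples, 3 features, toy binary labels.
rng = np.random.default_rng(seed=3)
X = rng.random((20, 3))
y = (X.sum(axis=1) > 1.5).astype(float)

w = rng.normal(size=3)      # random initial weights
b = 0.0                     # initial bias
learning_rate = 0.5

for epoch in range(1000):
    pred = sigmoid(X @ w + b)     # feedforward
    error = pred - y              # compare predictions with actual values
    # Chain rule: squared-error gradient through the sigmoid
    # (constant factors are folded into the learning rate).
    grad = error * pred * (1 - pred)
    w -= learning_rate * (X.T @ grad) / len(X)   # adjust weights to reduce error
    b -= learning_rate * grad.mean()             # adjust bias to reduce error

print("final mean squared error:", np.mean(error ** 2))
```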
5:53
That is all about how a neural network works. In our next lesson, we will learn more about neural networks.
5:59
Till then, keep learning, keep exploring, and stay motivated. See you in the next class.