Week 4 - Backpropagation
In simple language, backpropagation builds on the same calculus we use to find where a function reaches its maximum or minimum value: set the first derivative equal to zero, then use the second derivative to classify the point. See the example below.
Step 1: Consider this function:
f(x) = 10x^3 + 20x + 20
Step 2: Find the derivative of the function. We differentiate each term separately using the power rule for derivatives:
- For the term 10x^3, the derivative is 30x^2.
- For the term 20x, the derivative is 20.
- For the constant term 20, the derivative is 0, since the derivative of a constant is zero.
Therefore, the derivative of the function is:
f'(x) = 30x^2 + 20
To find the maximum value of the function, we can use calculus.
First, let's find the critical points by setting the derivative equal to zero:
30x^2 + 20 = 0
Solving for x:
x^2 = -2/3
This equation has no real solutions, so the function has no critical points at all. In fact, f'(x) = 30x^2 + 20 is at least 20 for every real x, which means the function is strictly increasing everywhere.
Since the function increases without bound as x approaches positive infinity and decreases without bound as x approaches negative infinity, there is no maximum value (and no minimum value) for this function.
The function above was created by me, and it is going sky-bound. Hahaha
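As a quick sanity check, here is a minimal Python sketch (using Python and the sympy library is my own assumption, not part of the course material) that differentiates the function symbolically and confirms there are no real critical points:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 10*x**3 + 20*x + 20                  # the function from the example above

f_prime = sp.diff(f, x)                  # power rule applied term by term
print(f_prime)                           # -> 30*x**2 + 20

# Real critical points would satisfy f'(x) = 0
print(sp.solve(sp.Eq(f_prime, 0), x))    # -> [] (no real solutions)
# f'(x) = 30*x**2 + 20 >= 20 everywhere, so f is strictly increasing
```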
Here is one more example:
Let's consider the function f(x) = x^4 - 2x^2 + 9.
To find the critical points, we'll take the derivative of f(x) and set it equal to zero:
f'(x) = 4x^3 - 4x = 0
Factoring out 4x, we get:
4x(x^2 - 1) = 0
Setting each factor equal to zero:
4x = 0 and x^2 - 1 = 0
From the first equation, we get x = 0.
From the second equation, we get x = 1 and x = -1.
Now, let's analyze the second derivative, f''(x) = 12x^2 - 4, to determine if these critical points are maxima, minima, or points of inflection:
When x = 0, f''(0) = -4 < 0, indicating a local maximum point.
When x = 1 or x = -1, f''(x) = 8 > 0, indicating local minimum points.
So, the local maximum value of the function is f(0) = 9 at x = 0, and the local minimum value is f(1) = f(-1) = 8 at x = 1 and x = -1.
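Again as a sketch (assuming Python and sympy, as before), we can verify the critical points and the second-derivative test for this quartic:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**4 - 2*x**2 + 9                    # the quartic from the second example

f1 = sp.diff(f, x)                       # -> 4*x**3 - 4*x
f2 = sp.diff(f, x, 2)                    # -> 12*x**2 - 4

# Classify each critical point with the second-derivative test
for p in sorted(sp.solve(sp.Eq(f1, 0), x)):
    kind = "local min" if f2.subs(x, p) > 0 else "local max"
    print(p, kind, f.subs(x, p))
# -> -1 local min 8
# ->  0 local max 9
# ->  1 local min 8
```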
In AI, however, the functions are non-linear and high-dimensional, and we look for their extrema using the same concept, but through gradients (partial derivatives) rather than solving f'(x) = 0 by hand. The process of backpropagation is commonly associated with neural networks and machine learning, particularly in the context of training neural networks through gradient descent.
Backpropagation is specifically applied in the context of neural networks to update the weights of the network based on the error between the predicted output and the actual output during the training process.
However, if such a function were used as part of a neural network, we could apply backpropagation to calculate the gradients of the loss function with respect to the parameters (weights and biases) of the network, and then use those gradients to update the parameters through gradient descent.
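To make that concrete, here is a minimal sketch of backpropagation for a single neuron in plain Python. The tiny network, the squared-error loss, the training example, and the learning rate are all illustrative assumptions on my part, not a specific library's API:

```python
# A single neuron: y = w*x + b, trained to match a target t.
# Backpropagation here is just the chain rule giving dL/dw and dL/db.
x, t = 2.0, 10.0          # one training example (input, target) - illustrative values
w, b = 0.5, 0.0           # initial weight and bias
lr = 0.01                 # learning rate

for step in range(200):
    y = w * x + b                 # forward pass: prediction
    loss = (y - t) ** 2           # squared-error loss

    # backward pass (chain rule):
    dloss_dy = 2 * (y - t)        # dL/dy
    dloss_dw = dloss_dy * x       # dL/dw = dL/dy * dy/dw
    dloss_db = dloss_dy * 1.0     # dL/db = dL/dy * dy/db

    # gradient descent update: move the parameters downhill on the loss
    w -= lr * dloss_dw
    b -= lr * dloss_db

print(w, b, w * x + b)            # the prediction approaches the target 10.0
```

Real frameworks automate exactly this chain-rule bookkeeping, but across millions of parameters instead of two.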
In summary, while backpropagation is not directly applicable to a standalone function like the ones above, it could be used in the context of a neural network that incorporates such a function as part of its architecture.
Welcome to the world of artificial neurons, made by the human brain using real neurons!!!