in RTU B.Tech (CSE-VI Sem) Machine Learning Lab by Goeduhub's Expert (3.1k points)
Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.


1 Answer

by Goeduhub's Expert (3.1k points)
Best answer

Backpropagation

  • Backpropagation is a supervised learning algorithm for training neural networks.
  • Every node in a neural network represents a neuron, so we can say that a neural network is a circuit of neurons.
  • A neural network consists of an input layer, one or more hidden layers, and an output layer.

What is the Role of Backpropagation?

  1. First of all, to create a neural network we have to initialize some weights.
  2. Whatever values we select for the weights, we do not know in advance how correct they are.
  3. To check whether the selected weight values are correct or incorrect, we have to calculate the error of the model.
  4. Suppose the model's error turns out to be large,
  5. meaning the predicted output is very different from the actual output. What do we do then? We try to minimize the error.

Note:

  1. Here we are trying to minimize the error. How do we do this?
  2. We want the model to learn to change the weights automatically so that we get the least error.
  3. We first calculate the error of the model; if the error is minimal, the model is ready for prediction.
  4. If the error is not minimal, we update the parameters (weights) and calculate the error again.
  5. This process runs until the error of the model is minimized, as the small sketch below illustrates.
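
To make this loop concrete, here is a minimal, self-contained sketch with a single made-up weight (the variable names and values are illustrative, not part of the lab program further below):

import numpy as np

# Toy model: predict t from x as w * x; learn w until the error is minimal.
x, t = 2.0, 8.0            # one input and its target output (made up)
w = np.random.uniform()    # initialize a random weight
lr = 0.05                  # learning rate

for step in range(1000):
    pred = w * x                     # forward pass: predicted output
    error = 0.5 * (t - pred) ** 2    # how wrong the model currently is
    if error < 1e-8:                 # error minimal -> ready for prediction
        break
    w += lr * (t - pred) * x         # otherwise update the weight and repeat

print("learned weight:", w, "prediction:", w * x)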

Gradient Descent

  1. There are a number of optimizers, but here we are using the gradient descent optimizer.
  2. Gradient descent works as an optimizer for finding the minimum of a function.
  3. In our case, we update the weights using gradient descent and try to minimize the error function, as the small example below shows.

Note: the goal of backpropagation is to drive the loss to its global minimum.
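
As a tiny illustration of gradient descent, here it finds the minimum of the made-up function f(w) = (w - 3)^2, whose true minimum is at w = 3:

# Gradient descent on f(w) = (w - 3)^2, with gradient f'(w) = 2 * (w - 3)
w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)
for _ in range(100):
    w -= lr * 2 * (w - 3)   # update rule: w := w - lr * f'(w)
print(w)   # ~= 3.0, the minimum of the function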

How does the backpropagation algorithm work?

Suppose we have a neural network with an input layer, a hidden layer, and an output layer.

Step 1: First, we give random weights to the model.

Step 2: Forward propagation (the normal neural network calculation).

Step 3: Calculate the total error.

Step 4: Backward propagation (gradient descent): update the parameters (weights and biases).

Step 5: Repeat steps 2-4 until the error is minimized (the predicted output is approximately equal to the original output).

The formulas that we are using here:

FORWARD PROPAGATION

1. To calculate the net input of h1:
   net_h1 = w1*x1 + w2*x2 + b1

2. To calculate the output of h1:
   out_h1 = sigmoid(net_h1) = 1 / (1 + e^(-net_h1))

3. To calculate the error at an output neuron o1:
   E_o1 = (1/2) * (target_o1 - out_o1)^2

4. To calculate the total error of the model:
   E_total = E_o1 + E_o2 + ...
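
As a quick numeric check of these formulas (the weights, bias, and inputs below are made-up illustrative values, not taken from the program later in this answer):

import numpy as np

w1, w2, b1 = 0.15, 0.20, 0.35   # illustrative weights and bias for h1
x1, x2 = 0.05, 0.10             # illustrative inputs

net_h1 = w1*x1 + w2*x2 + b1          # formula 1: net input of h1
out_h1 = 1 / (1 + np.exp(-net_h1))   # formula 2: sigmoid output of h1
print(net_h1)   # 0.3775
print(out_h1)   # ~= 0.5933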

Now we will propagate backward.

BACKWARD PROPAGATION

  1. Here we write down the process and the formulas for updating the weight w5.
  2. For that, we need to know how much of the total error has come with respect to the weight w5, i.e. ∂E_total/∂w5, which the chain rule splits into three factors.

1. Calculating the total error with respect to the output out_o1:
   ∂E_total/∂out_o1 = -(target_o1 - out_o1)

2. Calculating the output out_o1 with respect to its net input net_o1 (the sigmoid derivative):
   ∂out_o1/∂net_o1 = out_o1 * (1 - out_o1)

3. Calculating the net input net_o1 with respect to the weight w5:
   ∂net_o1/∂w5 = out_h1

4. Calculating the updated weight:
   ∂E_total/∂w5 = (∂E_total/∂out_o1) * (∂out_o1/∂net_o1) * (∂net_o1/∂w5)
   w5_new = w5 - lr * (∂E_total/∂w5)

Similarly, we can calculate the other weight values as well (all of this happens behind the scenes inside the model); a small numeric sketch of this update follows.
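
Here is the same chain-rule computation for w5 as a short Python sketch (all scalar values are illustrative assumptions; in the real network they come from the forward pass):

# Illustrative scalars; in the network these come from the forward pass
target_o1, out_o1 = 0.01, 0.7514   # desired vs. actual output of o1
out_h1 = 0.5933                    # output of hidden neuron h1
w5, lr = 0.40, 0.5                 # current weight and learning rate

dE_dout = -(target_o1 - out_o1)    # step 1: dE_total / dout_o1
dout_dnet = out_o1 * (1 - out_o1)  # step 2: dout_o1 / dnet_o1
dnet_dw5 = out_h1                  # step 3: dnet_o1 / dw5
dE_dw5 = dE_dout * dout_dnet * dnet_dw5   # chain rule: product of the three
w5_new = w5 - lr * dE_dw5          # step 4: updated weight
print(dE_dw5, w5_new)              # ~= 0.0822, ~= 0.3589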

How is backpropagation implemented?

Initializing variable values

import numpy as np

#input data (features)
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)

#original output (targets)
y = np.array(([92], [86], [89]), dtype=float)

X = X/np.amax(X,axis=0) #normalize each column by its maximum
y = y/100               #scale targets into (0, 1) to match the sigmoid output

print("X:\n",X)
print("y:\n",y)

Output: the normalized input matrix X and the scaled target vector y are printed.

#Defining sigmoid function for the output
def sigmoid(x):
    return 1/(1 + np.exp(-x))

#Derivative of sigmoid function; note that it takes the sigmoid
#OUTPUT as its argument, since s'(z) = s(z)*(1 - s(z))
def derivatives_sigmoid(x):
    return x * (1 - x)

#Variables initialization
epoch=7000               #setting training iterations
lr=0.1                   #setting learning rate
inputlayer_neurons = 2   #number of neurons in the input layer
hiddenlayer_neurons = 3  #number of neurons in the hidden layer
output_neurons = 1       #number of neurons in the output layer

Note:

  1. In this code we have defined the sigmoid function and its derivative.
  2. As you know, we train the neural network over the data many times; for that we need the number of epochs.
  3. Below that we have defined the number of neurons in each layer. A quick sanity check of the two functions follows.
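
A quick sanity check of the two functions (keeping in mind that derivatives_sigmoid expects the sigmoid output, not the raw input):

print(sigmoid(0.0))              # 0.5
print(derivatives_sigmoid(0.5))  # 0.25, the slope of the sigmoid at 0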

#Defining weights and biases for the hidden and output layers
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))

Note:

  1. Here we have initialized the weights and biases with random values.
  2. We first define the weights and bias for the first (and here only) hidden layer.
  3. After that we define the weights and bias for the output layer.
  4. Keep in mind when defining a weight matrix that its shape is (number of neurons in the previous layer, number of neurons in the layer the weights feed into).
  5. The shape of a bias vector is (1, number of neurons in the layer it belongs to). The shape check below makes this concrete.
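
With 2 input, 3 hidden, and 1 output neuron, these shapes can be verified directly (an optional check):

print(wh.shape)    # (2, 3): one weight per (input neuron, hidden neuron) pair
print(bh.shape)    # (1, 3): one bias per hidden neuron
print(wout.shape)  # (3, 1): one weight per (hidden neuron, output neuron) pair
print(bout.shape)  # (1, 1): one bias for the single output neuron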

#Forward propagation
for i in range(epoch):
    hinp1=np.dot(X,wh)               #weighted sum into the hidden layer
    hinp=hinp1 + bh                  #add the hidden-layer bias
    hlayer_act = sigmoid(hinp)       #hidden-layer activations
    outinp1=np.dot(hlayer_act,wout)  #weighted sum into the output layer
    outinp= outinp1+ bout            #add the output-layer bias
    output = sigmoid(outinp)         #predicted output

Note:

  1. Here we are just calculating the output of the model: first for the hidden layer and after that for the output layer, which finally gives the output.
  2. np.dot is used for the dot (matrix) product of two matrices; the shapes involved are shown below.
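
For reference, the shapes flowing through the forward pass (these prints are an optional check and can be run after the loop finishes):

print(X.shape)           # (3, 2): 3 samples with 2 features each
print(hlayer_act.shape)  # (3, 3): one activation per (sample, hidden neuron)
print(output.shape)      # (3, 1): one prediction per sample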

    #Backpropagation algorithm (these lines stay inside the epoch loop above)
    EO = y-output                          #error at the output layer
    outgrad = derivatives_sigmoid(output)  #sigmoid slope at the output
    d_output = EO* outgrad                 #delta of the output layer
    EH = d_output.dot(wout.T)              #error propagated back to the hidden layer
    hiddengrad = derivatives_sigmoid(hlayer_act)
    #how much hidden layer wts contributed to error
    d_hiddenlayer = EH * hiddengrad
    #Updating weights and biases
    wout += hlayer_act.T.dot(d_output) *lr
    #dot product of next-layer delta and current-layer output
    bout += np.sum(d_output, axis=0,keepdims=True) *lr
    wh += X.T.dot(d_hiddenlayer) *lr
    bh += np.sum(d_hiddenlayer, axis=0,keepdims=True) *lr  #hidden-layer bias update

print("Actual Output: \n" + str(y))
print("Predicted Output: \n" ,output)

Output: the actual (scaled) outputs and the predicted outputs, which should be approximately equal after training.

Note:

  1. In this code we first calculated the error of the output layer and after that the error of the hidden layer.
  2. As we know from the formulas, we have to find how much the hidden layer contributes to the total error, and likewise the contribution of each weight.
  3. After that we update the weights and biases, repeating until we get the minimum error.
  4. X.T gives the transpose of the matrix X. A one-line check of the final error is shown below.
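
To confirm that training actually minimized the error, one can print the final mean squared error after the loop (an optional addition):

print("Final MSE:", np.mean(np.square(y - output)))  # small if training converged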

Click here for more programs of  RTU ML LAB
