This is a very small neural network (a 2-node input layer, a 2-node hidden layer, and a 1-node output layer) intended as a simple demonstration of how to implement the backpropagation learning algorithm. The implementation is based on the network diagram below.
Assume a learning rate of 1 (if you don't know what this is, see the notation reference below, or don't worry about it for now). We will be using the sigmoid activation function, σ(x) = 1 / (1 + e^(−x)), to process inputs and turn them into outputs at the hidden layer and the output layer. Writing x_1 and x_2 for the two inputs, w_1 through w_4 for the input-to-hidden weights, and w_5, w_6 for the hidden-to-output weights (match these against the labels in the diagram), the forward pass is:
input to top neuron: net_h1 = x_1·w_1 + x_2·w_2
input to bottom neuron: net_h2 = x_1·w_3 + x_2·w_4
output of top neuron: out_h1 = σ(net_h1)
output of bottom neuron: out_h2 = σ(net_h2)
input to final neuron: net_o = out_h1·w_5 + out_h2·w_6
output of final neuron: out_o = σ(net_o)
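A minimal sketch of this forward pass in Python; the weight names follow the generic labels above, and every numeric value is made up purely for illustration:

```python
import math

def sigmoid(x):
    """Sigmoid activation: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative values only; substitute the ones from the diagram.
x1, x2 = 0.35, 0.9                    # input pattern
w1, w2, w3, w4 = 0.1, 0.8, 0.4, 0.6   # input -> hidden weights
w5, w6 = 0.3, 0.9                     # hidden -> output weights

net_h1 = x1 * w1 + x2 * w2          # input to top neuron
net_h2 = x1 * w3 + x2 * w4          # input to bottom neuron
out_h1 = sigmoid(net_h1)            # output of top neuron
out_h2 = sigmoid(net_h2)            # output of bottom neuron
net_o = out_h1 * w5 + out_h2 * w6   # input to final neuron
out_o = sigmoid(net_o)              # output of final neuron

print(f"network output: {out_o:.4f}")
```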
Calculate the errors, starting with the output neuron because the hidden layer errors are back-propagated from it (target is the desired output for the applied input pattern):

δ_o = out_o·(1 − out_o)·(target − out_o)
δ_h1 = out_h1·(1 − out_h1)·δ_o·w_5
δ_h2 = out_h2·(1 − out_h2)·δ_o·w_6
Calculate the new weights, each one using the update rule W+ = W + η·δ·input (with η = 1 here):

w_5+ = w_5 + η·δ_o·out_h1, w_6+ = w_6 + η·δ_o·out_h2
w_1+ = w_1 + η·δ_h1·x_1, w_2+ = w_2 + η·δ_h1·x_2
w_3+ = w_3 + η·δ_h2·x_1, w_4+ = w_4 + η·δ_h2·x_2
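Continuing the same sketch, the error and weight-update step could look like this; the target value is another made-up number, and the block reuses the variables from the forward-pass sketch above:

```python
target = 0.5   # hypothetical training target for this input pattern
eta = 1.0      # learning rate of 1, as assumed above

# Output neuron error: sigmoid derivative out(1 - out) times (target - out).
delta_o = out_o * (1.0 - out_o) * (target - out_o)

# Hidden neuron errors, back-propagated through the connecting weights
# (computed before those weights are changed).
delta_h1 = out_h1 * (1.0 - out_h1) * delta_o * w5
delta_h2 = out_h2 * (1.0 - out_h2) * delta_o * w6

# Weight updates: W+ = W + eta * delta * input.
w5 += eta * delta_o * out_h1
w6 += eta * delta_o * out_h2
w1 += eta * delta_h1 * x1
w2 += eta * delta_h1 * x2
w3 += eta * delta_h2 * x1
w4 += eta * delta_h2 * x2
```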
Using the sigmoid activation function σ(x) = 1 / (1 + e^(−x)), whose derivative takes the convenient form σ′(x) = σ(x)·(1 − σ(x)); this is where the out·(1 − out) factor in the error formulas comes from:
The input pattern is applied and the output is calculated
Calculate the input to the hidden layer neurons
Feed the inputs of the hidden layer neurons through the activation function
Multiply the hidden layer outputs by the corresponding weights to calculate the inputs to the output layer neurons
The error of each neuron is calculated, and that error is used to change the weights so that the errors shrink; this whole cycle is repeated, as sketched below
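Putting those steps together, one full training loop might look like the sketch below, reusing the made-up names and values from the earlier blocks and simply stopping after a fixed number of sweeps:

```python
for sweep in range(1000):
    # Apply the input pattern and calculate the output (forward pass).
    out_h1 = sigmoid(x1 * w1 + x2 * w2)
    out_h2 = sigmoid(x1 * w3 + x2 * w4)
    out_o = sigmoid(out_h1 * w5 + out_h2 * w6)

    # Calculate each neuron's error: output first, then back-propagate.
    delta_o = out_o * (1.0 - out_o) * (target - out_o)
    delta_h1 = out_h1 * (1.0 - out_h1) * delta_o * w5
    delta_h2 = out_h2 * (1.0 - out_h2) * delta_o * w6

    # Change the weights to shrink the error on the next sweep.
    w5 += eta * delta_o * out_h1
    w6 += eta * delta_o * out_h2
    w1 += eta * delta_h1 * x1
    w2 += eta * delta_h1 * x2
    w3 += eta * delta_h2 * x1
    w4 += eta * delta_h2 * x2

print(f"output after the last sweep: {out_o:.4f}")
```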
_ = subscript, W+ = new weight, W = old weight, δ = error, η = learning rate.
Calculate (back-propagate) hidden layer errors: each hidden neuron's error is its own sigmoid derivative times the output error, weighted by its connection to the output neuron, i.e. δ_h = out_h·(1 − out_h)·δ_o·w
Change hidden layer weights with the same rule as before: W+ = W + η·δ·input
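In code, the update rule in this notation reduces to a one-line helper (the function name is hypothetical, not from the original implementation):

```python
def updated_weight(w, delta, inp, eta=1.0):
    """Apply the rule W+ = W + eta * delta * input to one weight."""
    return w + eta * delta * inp
```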
This walkthrough follows a chapter from this book.

