(for babies) Artificial Neural Networks for kids


This topic contains 2 replies, has 2 voices, and was last updated by  God 3 years, 4 months ago.

Viewing 3 posts - 1 through 3 (of 3 total)
  • Author
    Posts
  • #6231

    God
    Participant

    “This book is for both ‘kids’, and experts! (This feat was not easy to pull off)

    This short book contains what is probably the easiest, most intuitive fun tutorial of how to describe an artificial neural network from scratch. (This short book is a clever and enjoyable yet detailed guide, that doesn’t “dumb down” the neural network literature)

    This short book is a chance to understand the whole structure of an elementary, but powerful artificial neural network, just as well as you understand how to write your name.”

     

    Amazon:
    https://www.amazon.com/dp/B077FX57ZZ

    Free copy on ResearchGate, with the equations coloured differently from the surrounding text (easier to read than the plain-coloured version):
    https://www.researchgate.net/publication/321162382_Artificial_Neural_Nets_For_Kids

    Free copy on quora:
    https://www.quora.com/What-is-the-most-intuitive-explanation-of-artificial-neural-networks/answer/Jordan-Bennett-9

     

    Thanks for reading.

    #6236

    Simon Paynton
    Participant

    Well done Jordan.  I have to admit, most of it is beyond me, but at the beginning, it says that the connections between nodes are weights.  Isn’t it more accurate to say that the connections are weighted?

    #6249

    God
    Participant

    Simon Paynton wrote:
    Well done Jordan. I have to admit, most of it is beyond me…

    All that’s going on is this:

    1. We want our computer model to guess what some input is saying.
    2. That model is a structure of weights, biases, and activations that can hold representations of the input in (1). We store the weights and biases in one BIG_MATRIX, and the activations in another.
    3. Each node/neuron in the structure has a bias, and is connected to other nodes by weights.
    4. There are layers of these nodes: an input layer that receives the input in the form of numbers, a middle layer that acts as an extra way to represent the input, and an output layer that represents a guess about what the input is saying.
    5. Weights form the connections between these layers of nodes.
    6. When we expose the model (2) to inputs (1), the numerical values from the input are passed through the structure. This doesn’t change the weights and biases, but it does set the activations of the nodes, by applying a transformation function to each node’s incoming weighted values, with respect to that node’s bias. (This is called the “forward pass”.)
    7. So, the first few times we expose the model (2) to inputs (1), it guesses terribly.
    8. We compute the difference between what the input is actually saying and what the model guessed. (This is the error.) From these error signals we get gradients, i.e. changes to the weights and biases, which we store in a <b><i>cost matrix C</i></b> of weight and bias changes.
    9. Once we’ve computed how each weight and bias should change, our cost matrix is filled with gradients, which are really values telling us how to nudge our weights and biases in a way that gives our model better guessing skills.
    10. Remember from (6) that each node has an activation. Each activation/neuron in our output layer is a potential guess about what the input is saying. The neuron with the highest activation corresponds to what the neural net is guessing. In the case of a digit detector, we have 10 output neurons, one for each digit 0-9, and whichever neuron has the highest value shows which numeral the detector guesses the input to be.
    11. So, now that we have our cost matrix of nudges, we nudge the weights and biases in BIG_MATRIX by adding the cost matrix C to BIG_MATRIX. Negative values in C will decrease the corresponding BIG_MATRIX value, and positive values will increase it.
    12. Repeated generation of these “nudges”, and application of them to our weights and biases, influences all our activations (including the ones in the output layer), which are actually our guesses.
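    The 12 steps above can be sketched in a few lines of NumPy. This is only a toy illustration under my own assumptions (a tiny 2-3-2 sigmoid network, squared-error-style updates, a single training example); the name BIG_MATRIX just echoes the wording of the post, and the book itself may organise things differently.

    ```python
    # A minimal sketch of the 12 steps above: a 2-3-2 sigmoid network
    # trained on one toy example. Names follow the post's wording.
    import numpy as np

    rng = np.random.default_rng(0)

    # Steps 2-5: weights and biases link an input layer (2 nodes),
    # a middle layer (3 nodes), and an output layer (2 nodes).
    W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
    W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        # Step 6: the "forward pass" - input flows through weights and
        # biases, setting each node's activation without changing them.
        a1 = sigmoid(W1 @ x + b1)
        a2 = sigmoid(W2 @ a1 + b2)
        return a1, a2

    # Step 1: a toy task - guess which of two classes the input belongs to.
    x = np.array([0.5, -1.0])
    target = np.array([1.0, 0.0])   # "what the input is actually saying"

    lr = 0.5
    for step in range(500):
        a1, a2 = forward(x)
        # Step 8: the error between the guess and the truth.
        err = a2 - target
        # Step 9: gradients ("nudges") for each weight and bias,
        # computed via the chain rule (backpropagation).
        d2 = err * a2 * (1 - a2)
        d1 = (W2.T @ d2) * a1 * (1 - a1)
        # Step 11: apply the nudges to the weights and biases.
        W2 -= lr * np.outer(d2, a1); b2 -= lr * d2
        W1 -= lr * np.outer(d1, x);  b1 -= lr * d1

    # Step 10: the output neuron with the highest activation is the guess.
    _, guess = forward(x)
    print(np.argmax(guess))
    ```

    After the loop, the output activations have been nudged toward the target, so the highest activation sits at the correct class.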


    Simon Paynton wrote:
    … but at the beginning, it says that the connections between nodes are weights. Isn’t it more accurate to say that the connections are weighted?

    Reading the 12 steps above (especially item 2), you see that we literally have a set of weights that are adjusted as the model learns. So I don’t see an issue with labelling these connections as weights, because they quite literally are the weights that are perturbed as the model learns.
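    To make that concrete, here is a small sketch (my own illustrative numbers, not from the book): the “connection” from input node j to middle node i is literally the number stored at W[i][j], and learning perturbs that same entry.

    ```python
    import numpy as np

    # 3 x 2 weight matrix: one entry per connection between
    # a 2-node input layer and a 3-node middle layer.
    W = np.array([[ 0.2, -0.5],
                  [ 0.7,  0.1],
                  [-0.3,  0.9]])

    # The connection from input node 0 to middle node 2 IS this weight:
    print(W[2, 0])

    # A learning step perturbs that same entry
    # (nudge = learning rate * gradient):
    W[2, 0] -= 0.1 * 0.5
    print(W[2, 0])
    ```

    So whether you say “the connections are weights” or “the connections are weighted”, the thing stored and updated is the same matrix entry.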

     

     

