(for babies) Artificial Neural Networks for kids
November 23, 2017 at 2:58 am #6231
“This book is for both ‘kids’ and experts! (This feat was not easy to pull off.)
This short book contains what is probably the easiest, most intuitive, and most fun tutorial on how to describe an artificial neural network from scratch. (It is a clever and enjoyable yet detailed guide that doesn’t “dumb down” the neural network literature.)
This short book is a chance to understand the whole structure of an elementary but powerful artificial neural network, just as well as you understand how to write your name.”
Amazon:
https://www.amazon.com/dp/B077FX57ZZ

Free copy, with the equations nicely coloured differently from the surrounding text, on ResearchGate:
https://www.researchgate.net/publication/321162382_Artificial_Neural_Nets_For_Kids

Free copy on Quora:
https://www.quora.com/What-is-the-most-intuitive-explanation-of-artificial-neural-networks/answer/Jordan-Bennett-9

Thanks for reading.
November 23, 2017 at 10:34 am #6236

Well done Jordan. I have to admit, most of it is beyond me, but at the beginning it says that the connections between nodes are weights. Isn’t it more accurate to say that the connections are weighted?
November 24, 2017 at 5:41 am #6249

Simon Paynton wrote:
Well done Jordan. I have to admit, most of it is beyond me…
All that’s going on is that:
1. We want our computer model to guess what some input is saying.
2. That model is a structure of weights, biases, and activations that can hold representations of the input in (1). We store the weights and biases in one BIG_MATRIX, and the activations in another.
3. Each node/neuron in the structure has a bias, and is connected to other nodes by weights.
4. There are layers of these nodes: an input layer that receives input in the form of numbers, a middle layer that acts as an extra way to represent the input, and an output layer that represents a guess about what the input is saying.
5. Weights connect the network together by linking these layers of nodes.
6. When we expose the model (2) to inputs (1), the numerical values from the input are passed through the structure. This doesn’t affect the weights and biases, but it does affect the activations of the nodes, via transformation functions applied to each node’s weighted incoming values, with respect to the bias of each node. (This is called the “forward pass”; see the first sketch after this list.)
7. So, the first times we expose the model (2) to inputs (1), it guesses terribly.
8. We compute the difference between what the input is actually saying and what the model guessed. (This is the error.) We store these error signals, aka gradients, aka changes in weights and biases, in a cost matrix C of weight and bias changes.
9. When we’ve computed how each weight and bias should change, our cost matrix becomes filled with gradients, which are really values telling us how to nudge our weights and biases in a way that gives our model better guessing skills.
10. Remember from (6) that each node has an activation. Each activation/neuron in our output layer is a potential guess about what the input is saying. The neuron with the highest activation corresponds to what the neural net is guessing. In the case of a digit detector, we have 10 output neurons, one for each of the digits 0-9, and whichever neuron has the highest value shows which numeral the digit detector is guessing some input to be.
11. So, now that we have our cost matrix of nudges, we nudge the weights and biases in BIG_MATRIX by adding the cost matrix C to BIG_MATRIX. Negative values in C will decrease the corresponding BIG_MATRIX values, and positive values will increase them. (See the second sketch after this list.)
12. Repetitive generation of “nudges”, and application of those nudges to our weights and biases, influences all our activations (including the ones in the output layer), which are actually our guesses.
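To make the forward pass concrete, here is a minimal sketch in Python/NumPy. The layer sizes, the sigmoid transformation function, and every name in it (W1, b1, forward_pass) are illustrative assumptions, not material from the book:

```python
import numpy as np

# Assumed tiny network: 4 input nodes, 5 middle nodes, 3 output nodes.
rng = np.random.default_rng(0)

W1 = rng.standard_normal((5, 4))  # weights linking input layer -> middle layer
b1 = np.zeros(5)                  # one bias per middle-layer node
W2 = rng.standard_normal((3, 5))  # weights linking middle layer -> output layer
b2 = np.zeros(3)                  # one bias per output node

def sigmoid(z):
    # A common "transformation function": squashes any value into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x):
    # The input's numbers flow through the structure. The weights and
    # biases are read but not changed; only activations are produced.
    a1 = sigmoid(W1 @ x + b1)     # middle-layer activations
    a2 = sigmoid(W2 @ a1 + b2)    # output-layer activations (the guesses)
    return a1, a2

x = rng.standard_normal(4)        # some input, as numbers
middle, guess = forward_pass(x)
print("highest-activation output node:", guess.argmax())
```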
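And here is a matching sketch of the error-and-nudge steps, continuing from the code above. For brevity it nudges only the output layer's weights and biases; a full treatment would backpropagate nudges to every layer. The squared-error cost, the target, and the learning rate are again assumptions for illustration:

```python
# Hypothetical target: suppose the input is "actually saying" output node 1.
target = np.array([0.0, 1.0, 0.0])

middle, guess = forward_pass(x)
error = guess - target            # difference between the guess and the truth

# Turn the error into gradients ("nudges") for the output layer, using a
# squared-error cost and the sigmoid's derivative.
delta = error * guess * (1.0 - guess)

# The cost-matrix idea in miniature: signed nudges for the weights and biases.
# The minus sign and learning rate are folded in, so adding C to the weights
# (as in step 11) moves the model toward better guesses.
learning_rate = 0.5
C_W2 = -learning_rate * np.outer(delta, middle)
C_b2 = -learning_rate * delta

W2 += C_W2                        # nudge the weights
b2 += C_b2                        # nudge the biases
```

Repeating the forward pass and this nudging step over many inputs is the whole training loop that step 12 describes.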
Simon Paynton wrote:
… but at the beginning, it says that the connections between nodes are weights. Isn’t it more accurate to say that the connections are weighted?

Reading the 12 steps above (especially item 2), you see that we literally have a sequence of weights which are adjusted as the model learns. So, I don’t detect an issue with labeling these connections as weights, because they are quite literally the weights that are perturbed as the model learns.