If the previous post was about understanding gradient descent, this one is about generalizing it to networks with multiple input and output nodes.
One of the biggest takeaways from this section is that measuring the error is central to neural network training: the error signal is what drives every weight update.
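To make that concrete, here is a minimal sketch (my own illustrative code, not tied to any specific library) of gradient descent generalized to multiple inputs and multiple outputs. The network, learning rate, and example numbers are all assumptions chosen for demonstration; the point is that each output's measured error drives the updates to the weights feeding it.

```python
def neural_network(inputs, weights):
    # weights[i][j] connects input j to output i
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def train_step(inputs, weights, targets, alpha=0.01):
    preds = neural_network(inputs, weights)
    # Measure the error for each output: squared difference between
    # prediction and target. This is the quantity we want to shrink.
    errors = [(p - t) ** 2 for p, t in zip(preds, targets)]
    deltas = [p - t for p, t in zip(preds, targets)]
    # Update each weight in proportion to its input and the delta of
    # the output it feeds (gradient descent on the squared error).
    new_weights = [
        [w - alpha * d * x for w, x in zip(row, inputs)]
        for row, d in zip(weights, deltas)
    ]
    return new_weights, errors

# Hypothetical toy data: three input features, two outputs.
inputs = [8.5, 0.65, 1.2]
weights = [[0.1, 0.2, -0.1],
           [0.3, 0.1, 0.0]]
targets = [1.0, 0.5]

for _ in range(50):
    weights, errors = train_step(inputs, weights, targets)

print(errors)  # both errors shrink toward zero over the iterations
```

The same update rule from the single-weight case applies unchanged; it is simply repeated for every input/output pair, which is what "generalizing" means here.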
“If you wish to get good at anything, learn the theory behind what you seek to master.” This wisdom from my grandfather has stayed with me to this day, and it guides how I'm approaching this Deep Learning (DL) journey.
One of my goals this year is to build a DL web application and write an accompanying tutorial series to teach others!
You super-serve and take care of those who benefit the most from what you create, knowing that more opportunity & money will flow naturally from that.