Why Backpropagation Is So Important For Models In Machine Learning
How gradients drive learning in neural networks | Data Science for Machine Learning Series (5)
If you're not a paid member on Medium, you can read this story for free: Friends link
In my last article, I talked about the unsung heroes behind every deep learning model: optimizers, loss functions, and gradients. Remember, these are the backstage crew that quietly makes the learning process work. If you've ever used model.compile() and wondered what those parameters actually do, I recommend reading that post first (especially if you're not yet comfortable with the basics).
But today… I'll be going more in depth.
Specifically, I'll focus on how gradients and loss functions fuel backpropagation, the powerful algorithm that updates the weights inside your neural network. From there, I'll walk through examples using CNNs, complete with calculations.
But first, let’s kick off with the foundation. Gradients!