Math Foundations
The calculus and linear algebra behind every neural net
6 lessons
Lessons
- 01 Gradient Descent (Easy)
  The workhorse optimizer — derive, implement, and visualize it.
- 02 Sigmoid & ReLU (Easy)
  Activation functions, their derivatives, and why ReLU won.
- 03 Softmax (Easy)
  Turn raw logits into calibrated probabilities.
- 04 Cross-Entropy Loss (Medium)
  The canonical classification loss — from KL divergence down.
- 05 Linear Regression (Forward) (Easy)
  Predictions as matrix multiplications.
- 06 Linear Regression (Training) (Medium)
  Closed-form vs iterative — when each wins.
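
As a taste of the first lesson, here is a minimal gradient-descent sketch; the objective f(w) = (w - 3)^2, starting point, learning rate, and iteration count are illustrative assumptions, not taken from the course material:

```python
def grad(w):
    # Analytic derivative of the toy objective f(w) = (w - 3)**2.
    return 2.0 * (w - 3.0)

w = 0.0    # illustrative starting point
lr = 0.1   # illustrative learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)  # step against the gradient

print(round(w, 4))  # approaches the minimizer w = 3
```

Each step moves w a fraction of the way toward the minimum, which is exactly the behavior the lesson asks you to derive and visualize.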