NOUS
Prof. Ramesh Kumar, PES University
6 lessons

Math Foundations

The calculus and linear algebra behind every neural net

Lessons

  1. Gradient Descent
     The workhorse optimizer — derive, implement, and visualize it.
     Difficulty: Easy

  2. Sigmoid & ReLU
     Activation functions, their derivatives, and why ReLU won.
     Difficulty: Easy

  3. Softmax
     Turn raw logits into calibrated probabilities.
     Difficulty: Easy

  4. Cross-Entropy Loss
     The canonical classification loss — from KL divergence down.
     Difficulty: Medium

  5. Linear Regression (Forward)
     Predictions as matrix multiplications.
     Difficulty: Easy

  6. Linear Regression (Training)
     Closed-form vs iterative — when each wins.
     Difficulty: Medium
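To preview the idea behind lesson 1, here is a minimal sketch of gradient descent on a toy one-variable function (the function, learning rate, and step count are illustrative choices, not taken from the lesson itself):

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Minimize a 1-D function by repeatedly stepping against its gradient."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move downhill along the negative gradient
    return w

# Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 4))  # converges toward the minimizer 3.0
```

The same loop scales to vectors and neural-net parameters once `grad` is computed by backpropagation.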
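Lesson 2's two activation functions fit in a few lines; this sketch (function names are my own) shows the derivative property that motivates ReLU's dominance, since the sigmoid gradient never exceeds 0.25:

```python
import math

def sigmoid(x):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Derivative s(x) * (1 - s(x)); peaks at 0.25, so deep sigmoid stacks saturate."""
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    """max(0, x): cheap to compute, gradient 1 for all positive inputs."""
    return max(0.0, x)

print(round(sigmoid(0.0), 2))       # 0.5
print(round(sigmoid_grad(0.0), 2))  # 0.25
print(relu(-2.0), relu(3.0))        # 0.0 3.0
```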
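Lessons 3 and 4 pair naturally: softmax turns logits into probabilities, and cross-entropy scores them. A plain-Python sketch (the example logits are arbitrary):

```python
import math

def softmax(logits):
    """Map raw logits to probabilities; subtracting the max avoids overflow."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target):
    """Negative log-probability assigned to the true class index `target`."""
    return -math.log(probs[target])

p = softmax([2.0, 1.0, 0.1])
print([round(x, 3) for x in p])       # three probabilities summing to 1
print(round(cross_entropy(p, 0), 3))  # small loss: class 0 already dominates
```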
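Lessons 5 and 6 together amount to a few lines of NumPy: the forward pass is one matrix multiplication, and the same model can be fit either in closed form or by gradient descent. A sketch with synthetic data (the weights, noise level, and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=100)  # near-linear synthetic data

# Lesson 5, forward pass: predictions are a matrix multiplication.
def predict(X, w):
    return X @ w

# Lesson 6, closed form: the least-squares solution to min ||Xw - y||^2.
w_closed = np.linalg.lstsq(X, y, rcond=None)[0]

# Lesson 6, iterative: gradient descent on mean squared error.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = (2.0 / len(y)) * X.T @ (predict(X, w) - y)
    w -= lr * grad

print(np.allclose(w, w_closed, atol=1e-3))  # both routes recover ~[2, -1]
```

The closed form wins on small, well-conditioned problems; the iterative route is what survives at neural-net scale, where no closed form exists.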