NOUS
Prof. Ramesh Kumar, PES University
5 lessons

RNN & LSTM

Sequence modeling before attention — and the problems that motivated it

Lessons

  1. Recurrent Neural Network

     Hidden state, shared weights, sequential processing.

     Difficulty: Medium
  2. Backprop Through Time

     Unroll the loop to compute gradients across a sequence.

     Difficulty: Hard
  3. Vanishing Gradient Problem

     Why long sequences kill plain RNNs — analytically.

     Difficulty: Hard
  4. LSTM

     Gates, cell state, and the first real fix for long memory.

     Difficulty: Hard
  5. GRU

     A lighter LSTM that often matches it.

     Difficulty: Medium
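The lessons above walk from the plain recurrent cell to its gated successors. As a rough preview of what the course covers, the three forward steps can be sketched in NumPy; the sizes, weight names, and fused-gate layout below are illustrative choices for this sketch, not the course's own code:

```python
import numpy as np

rng = np.random.default_rng(0)
H, X = 4, 3  # hidden size and input size (arbitrary for illustration)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Lesson 1 — plain RNN step: the same weights are reused at every time step.
Wx = rng.normal(size=(H, X))
Wh = rng.normal(size=(H, H))
b = np.zeros(H)

def rnn_step(x, h):
    return np.tanh(Wx @ x + Wh @ h + b)

# Lesson 4 — LSTM step: forget/input/output gates plus a candidate,
# all computed from one fused weight matrix acting on [x; h].
Wg = rng.normal(size=(4 * H, X + H))
bg = np.zeros(4 * H)

def lstm_step(x, h, c):
    z = Wg @ np.concatenate([x, h]) + bg
    f = sigmoid(z[:H])          # forget gate
    i = sigmoid(z[H:2 * H])     # input gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell value
    c_new = f * c + i * g       # additive cell update: the "fix" for long memory
    return o * np.tanh(c_new), c_new

# Lesson 5 — GRU step: update and reset gates, no separate cell state.
Wzr = rng.normal(size=(2 * H, X + H))
Wn = rng.normal(size=(H, X + H))

def gru_step(x, h):
    zr = sigmoid(Wzr @ np.concatenate([x, h]))
    z, r = zr[:H], zr[H:]
    n = np.tanh(Wn @ np.concatenate([x, r * h]))
    return (1 - z) * h + z * n  # interpolate between old state and candidate

x = rng.normal(size=X)
h0, c0 = np.zeros(H), np.zeros(H)
h_rnn = rnn_step(x, h0)
h_lstm, c_lstm = lstm_step(x, h0, c0)
h_gru = gru_step(x, h0)
```

The additive cell update in `lstm_step` is the point of Lessons 3 and 4: gradients flow through `c_new = f * c + i * g` without being squashed through a `tanh` at every step, which is what vanishes in the plain RNN.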