Submitted by itsstylepoint t3_xxkgp2 in MachineLearning

Hi folks,

stylepoint here.

I have released a YouTube series discussing and implementing activation functions.

Videos:

GitHub: https://github.com/oniani/ai

Some notes about the series:

  • In every video, I discuss the activation function before implementing it.
  • In every video, I compute/derive the derivative/gradient of the activation function.
  • In every video, I provide two implementations of the activation function: one manual and one using PyTorch's autograd engine.
  • In every video, I use gradcheck to test the implementation (a minimal sketch of this pattern is shown right after this list).
  • Every video has timestamps, so you can skip parts that are not of interest.
  • There is not a lot of interdependence across the videos, so you can watch some and skip others.
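
For a rough idea of what the implementations look like, here is a minimal sketch of the manual-plus-gradcheck pattern described above (not the exact code from the videos; the class name is just illustrative), using Sigmoid as the example: a torch.autograd.Function with a hand-written backward, checked against numerical gradients with gradcheck.

```python
import torch
from torch.autograd import Function, gradcheck


class ManualSigmoid(Function):
    @staticmethod
    def forward(ctx, x):
        out = torch.sigmoid(x)
        ctx.save_for_backward(out)  # keep the output around for the backward pass
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (out,) = ctx.saved_tensors
        # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
        return grad_output * out * (1 - out)


# gradcheck compares the hand-written backward against finite-difference
# gradients; it expects double precision and requires_grad=True.
x = torch.randn(8, dtype=torch.double, requires_grad=True)
print(gradcheck(ManualSigmoid.apply, (x,)))  # prints True if the gradients match
```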

Hope y'all will enjoy these vids!

Comments


Gemabo t1_ircq1tp wrote

Bookmarked!

mister-guy-dude t1_ircwrn7 wrote

This is great! Thanks for creating it!

itsstylepoint OP t1_ircxamq wrote

Hey thanks for the kind words!
Will be making more AI/ML YouTube series in the future - in fact, working on one as we speak!

_chyld t1_irdmjiv wrote

Excellent, I'll check them out.

awebb78 t1_irdxw4o wrote

Sounds very interesting

Erosis t1_irewcom wrote

Your videos have been great so far! Can't wait for more modeling content.

itsstylepoint OP t1_irft9uh wrote

Thank you!
Yup, that is the plan! Will likely make a few more series (about gradient descent, optimizers, etc.) first. We need these for DL, and if someone asks how things work, I can then point them to the appropriate video series. After that, will dive into deep learning.

pm_me_your_ensembles t1_irjbb1t wrote

Do you go over numerical stability issues?

itsstylepoint OP t1_irjh4n3 wrote

Yup, all implementations are numerically stable.

Note that I do not discuss numerical stability issues for every activation function, only for those where the naive implementation is not numerically stable (e.g., Sigmoid and Tanh).
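
For anyone curious what this means in practice, here is a quick sketch (not the code from the videos) using Tanh: the textbook formula overflows for large |x| and produces NaNs, while an algebraically equivalent form that only exponentiates non-positive values stays finite.

```python
import torch


def naive_tanh(x):
    # (e^x - e^-x) / (e^x + e^-x): exp(1000.) is inf, and inf/inf is nan
    return (torch.exp(x) - torch.exp(-x)) / (torch.exp(x) + torch.exp(-x))


def stable_tanh(x):
    # tanh(x) = sign(x) * (1 - e^(-2|x|)) / (1 + e^(-2|x|)); the exponent is
    # always <= 0, so exp never overflows
    e = torch.exp(-2 * torch.abs(x))
    return torch.sign(x) * (1 - e) / (1 + e)


x = torch.tensor([-1000.0, 0.0, 1000.0])
print(naive_tanh(x))   # tensor([nan, 0., nan])
print(stable_tanh(x))  # tensor([-1., 0., 1.])
```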

I also have a separate video discussing numerical stability: AI/ML Model API Design and Numerical Stability (follow-up). But this is in the context of Gaussian Naive Bayes.

CeFurkan t1_irmeyx2 wrote

How are you able to program in such a fashion that, without doing any debugging, you are sure everything works correctly as intended?
