Affectionate_Log999 t1_irokzmv wrote

I think what people most want to see is an actual implementation of this stuff, not just going through the paper and explaining the math.

6

Professional-Ebb4970 t1_irpktsf wrote

Depends on the person; there are probably many people who prefer the general theoretical aspects too.

14

carlml t1_irpnot6 wrote

I second this. Moreover, a lot of people already do implementations, whereas very few (if any) go over the theory.

2

Fun_Wolverine8333 OP t1_irqk0mi wrote

Initially, my idea was to make purely theoretical videos. But I think that for some topics (where there is a clear algorithm, for example) it can be helpful to show a Python implementation. Even then, I prefer the video overall to stay math-focused, so going forward I will add code where it seems necessary to complete the explanation. For example, in causality, the concept of average treatment effects can be explained through theory, but a concrete Python example makes it much clearer to anyone watching what exactly is happening.
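
For instance, here is a minimal sketch of what such an example might look like (the data and setup are hypothetical, purely to illustrate the idea):

```python
import numpy as np

# Hypothetical randomized experiment: binary treatment and a noisy outcome
# with a true average treatment effect of 2.0.
rng = np.random.default_rng(0)
treatment = rng.integers(0, 2, size=1_000)          # 1 = treated, 0 = control
outcome = 2.0 * treatment + rng.normal(size=1_000)  # outcome = effect + noise

# Under randomization, the ATE can be estimated as the difference in group means.
ate_estimate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print(f"Estimated average treatment effect: {ate_estimate:.2f}")  # close to 2.0
```

Seeing the difference in means computed on actual arrays, right next to the formal definition, is exactly the kind of pairing I have in mind.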

1

RezaRob t1_isdj8u0 wrote

I saw a debate, I think on Stack Exchange, about why people use pseudocode.

In a situation like this, good pseudocode is probably much better than Python. It lasts forever, applies to every language, and anyone can read it if it's well written.

2

Fun_Wolverine8333 OP t1_irolvfd wrote

Thanks for the suggestion. I might try to include some simple Python implementations of the ideas I'm presenting, depending on the concept.

3

todeedee t1_irq0p5r wrote

Disagree -- the logic behind Bayesian estimators is extremely finicky. It took me fucking *years* to wrap my head around Variational Inference, and I still don't have a great intuition for why MCMC works. If the theory checks out, the implementation is pretty straightforward.
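
To illustrate that last point with a minimal sketch (my own toy example, not anyone's reference implementation): once you accept the theory, a bare-bones random-walk Metropolis-Hastings sampler is only a few lines.

```python
import numpy as np

def metropolis_hastings(log_prob, init, n_samples=5_000, step=0.5, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler for a 1-D target."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = init
    for i in range(n_samples):
        proposal = x + step * rng.normal()             # symmetric Gaussian proposal
        log_accept = log_prob(proposal) - log_prob(x)  # log acceptance ratio
        if np.log(rng.uniform()) < log_accept:         # accept with prob min(1, ratio)
            x = proposal
        samples[i] = x                                 # record current state either way
    return samples

# Sanity check: sample from a standard normal (log density up to a constant).
samples = metropolis_hastings(lambda x: -0.5 * x**2, init=0.0)
print(samples.mean(), samples.std())  # roughly 0 and 1
```

The hard part is the theory that justifies why this converges to the target distribution, not the code itself.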

1

RezaRob t1_isdkb67 wrote

Speaking only in general here: often in ML, we don't know exactly why things work theoretically. Even for something like convolutional neural networks, I'm not sure we have a complete understanding of "why" they work, or of what happens internally. There have certainly been papers that call into question our assumptions about how these things work; adversarial images are a good example of something we wouldn't have expected. So in ML, the method/algorithm, and whether it works, are sometimes more important than an exact theoretical understanding of what's happening internally. You can't argue with superhuman AlphaGo performance.

1