
fasttosmile t1_iqrel80 wrote

This is wrong, see: https://www.youtube.com/watch?v=kcVWAKf7UAg

The real reason is that it's simply faster to train with smaller batches (because each step is quicker).
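Roughly what I mean, as a minimal sketch (the least-squares objective, array sizes, batch size, and learning rate below are all made up for illustration, not from any particular paper): a mini-batch step only touches a small subset of the data, so many cheap steps fit in the wall-clock time one full-batch step would take.

```python
import numpy as np

# Toy least-squares problem: minimize ||Xw - y||^2.
rng = np.random.default_rng(0)
N, D = 100_000, 50
X = rng.normal(size=(N, D))
y = X @ rng.normal(size=D) + 0.1 * rng.normal(size=N)

w = np.zeros(D)
lr = 0.1
batch_size = 256  # illustrative value

def grad(Xb, yb, w):
    # Gradient of the mean squared error on the given (mini-)batch.
    return 2.0 / len(yb) * Xb.T @ (Xb @ w - yb)

# Full-batch step: one update, but it must touch all N examples.
w_full = w - lr * grad(X, y, w)

# Mini-batch step: same update rule on a random subset, so each step
# costs roughly batch_size / N of a full-batch step, and you can take
# many of them in the same amount of time.
idx = rng.choice(N, size=batch_size, replace=False)
w_mini = w - lr * grad(X[idx], y[idx], w)
```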


ClearlyCylindrical t1_iqrmrxz wrote

Yes, that too, although my explanation wasn't incorrect; it just needed more added to it, right?


fasttosmile t1_iqrolwa wrote

For a while there was a belief that the stochasticity was key to good performance (one paper supporting the hypothesis is from 2016). Your framing makes it sound like that is still the case - you suggest no other reason for not doing full-batch descent - and I think it's important to point out that it isn't.
