
currentscurrents t1_iye68b8 wrote

Well, fair or not, it's a real challenge for ML since large datasets are hard to collect and expensive to train on.

It would be really nice to be able to learn generalizable ideas from small datasets.


Desperate-Whereas50 t1_iye7hf3 wrote

That's correct. But to define the bare minimum, you need a baseline. I just wanted to say that humans are a bad baseline because we have "training data" encoded in our DNA. Furthermore, for tabular data, ML systems often outperform humans without needing as much training data.
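As a stdlib-only sketch of the tabular-data point: even a trivial 1-nearest-neighbour model can separate a tiny tabular dataset from just a handful of rows. The data here is made up purely for illustration.

```python
import math

# Hypothetical tiny tabular dataset: (feature1, feature2) -> label.
# Only six training rows, mimicking a low-data regime.
train = [
    ((1.0, 1.0), 0), ((1.2, 0.9), 0), ((0.8, 1.1), 0),
    ((3.0, 3.2), 1), ((3.1, 2.9), 1), ((2.9, 3.0), 1),
]

def predict(x):
    # 1-nearest-neighbour: return the label of the closest training point.
    _, label = min((math.dist(x, xi), yi) for xi, yi in train)
    return label

test = [((1.1, 1.0), 0), ((3.0, 3.1), 1), ((0.9, 0.8), 0)]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(accuracy)  # 1.0 on this toy split
```

Of course, a toy split like this only illustrates the claim; real tabular benchmarks (where tree-based models do well with modest data) are a fairer test.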

But of course, needing less data to get good results is always better. I wouldn't argue with that.

Edit: Typos
