Comments


Denpol88 OP t1_itbir0k wrote

TabPFN is radically different from previous ML methods. It is meta-learned to approximate Bayesian inference with a prior based on principles of causality and simplicity. Here's a qualitative comparison to some sklearn classifiers, showing very smooth uncertainty estimates.

TabPFN happens to be a transformer, but this is not your usual trees vs. nets battle. Given a new dataset, there is no costly gradient-based training. Rather, it's a single forward pass of a fixed network: you feed in (X_train, y_train, X_test); the network outputs p(y_test).

TabPFN is fully learned: we only specified the task (strong predictions in a single forward pass, for millions of synthetic datasets) but not how it should be solved. Still, TabPFN outperforms decades' worth of manually created algorithms. A big step up in learning to learn.

Imagine the possibilities! Portable real-time ML with a single forward pass of a medium-sized neural net (25M parameters).
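
A minimal sketch of what "no training, just a forward pass" looks like in practice, assuming the authors' `tabpfn` package and its scikit-learn-style `TabPFNClassifier` interface (exact constructor arguments may differ between releases); the dataset and baseline are just illustrative:

```python
# Sketch: classify a small tabular dataset with TabPFN vs. a standard sklearn model.
# Assumes `pip install tabpfn scikit-learn`; the TabPFNClassifier interface shown
# follows sklearn conventions, but details may vary by release.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from tabpfn import TabPFNClassifier  # assumed package/class name

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# "fit" only stores the training set; prediction is a single forward pass of the
# fixed, pre-trained ~25M-parameter transformer over (X_train, y_train, X_test).
tabpfn = TabPFNClassifier()
tabpfn.fit(X_train, y_train)
p_test = tabpfn.predict_proba(X_test)[:, 1]  # p(y_test) from one forward pass

# Conventional baseline, actually trained from scratch on this dataset.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
p_rf = rf.predict_proba(X_test)[:, 1]

print("TabPFN AUC:       ", roc_auc_score(y_test, p_test))
print("RandomForest AUC: ", roc_auc_score(y_test, p_rf))
```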

16

AI_Enjoyer87 t1_itbkffq wrote

When these new models get scaled, things are gonna get crazy.

11

Kinexity t1_itbwp5m wrote

People who don't have a clue about this also don't have a clue about most technology. Looking at estimates of the compute of the human brain, it's not hard to see that our ML models are very inefficient, which is why we're seeing so many gains right now. Current growth in ML is like Moore's Law in semiconductors in the 70s: everyone knew back then that there was a lot of room to grow, but you could only get there through incremental changes.

6

Ezekiel_W t1_itc49gy wrote

So many amazing AI papers in the last few days, amazing times.

3

NTIASAAHMLGTTUD t1_itd6jci wrote

What does the y-axis mean here? What would be considered '1' (as opposed to .88)?

1

quinkmo t1_iteq961 wrote

ELI5 should be a sub pre-req

4