
PeedLearning t1_ir2ar52 wrote

Any concrete papers you have in mind?

1

Light991 OP t1_ir2bqnb wrote

Just sort her papers by citations and look at the years…

−5

PeedLearning t1_ir48kps wrote

Yes, MAML is on top. But I don't think it has been very impactful, and neither has the field of meta-learning as a whole.
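(For anyone unfamiliar with what's being discussed: MAML learns an initialization that adapts to a new task in a few gradient steps, via an inner adaptation step and an outer meta-update differentiated through it. Here's a minimal toy sketch on 1-D linear regression tasks `y = a*x`, where the task loss is quadratic so the second-order gradient can be written out by hand; all names and numbers are illustrative, not from any paper.)

```python
# Toy second-order MAML on 1-D linear regression tasks y = a * x.
# Model: f(x) = w * x; task loss L_a(w) = c * (w - a)^2, with c = E[x^2].
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(20)]
c = sum(x * x for x in xs) / len(xs)  # E[x^2], a constant in every gradient

alpha, beta = 0.1, 0.05   # inner / outer learning rates (arbitrary choices)
w = 0.0                   # meta-parameter (the learned initialization)

def inner_adapt(w, a):
    # One inner gradient step on the task loss c * (w - a)^2.
    return w - alpha * 2 * c * (w - a)

for step in range(2000):
    tasks = [random.uniform(1.0, 3.0) for _ in range(4)]  # task = slope a
    grad = 0.0
    for a in tasks:
        w_adapted = inner_adapt(w, a)
        # Gradient of the post-adaptation loss c * (w_adapted - a)^2 w.r.t. w,
        # differentiating THROUGH the inner step: the (1 - 2*alpha*c) factor
        # is the second-order term that distinguishes MAML from first-order
        # variants like FOMAML/Reptile.
        grad += 2 * c * (w_adapted - a) * (1 - 2 * alpha * c)
    w -= beta * grad / len(tasks)

# After meta-training, w sits near the middle of the task range, i.e. the
# initialization from which one inner step reaches any task fastest.
```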

2

carlml t1_ira0vsr wrote

What has been impactful, according to you? And what makes you say meta-learning hasn't been?

1

PeedLearning t1_irbosst wrote

(I have published myself in the meta-learning field, and worked a lot on robotics)

I see no applications of meta-learning appearing outside of self-citations within the field. The SOTA in supervised learning doesn't use any meta-learning, and neither does the SOTA in RL. The promise of learning to learn never really came true...

... until large supervised language models seemed to suddenly meta-learn as an emergent property.
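(What "meta-learn as an emergent property" refers to is in-context learning: the few training examples live entirely in the prompt, and the model adapts in a single forward pass with no gradient updates. A minimal sketch of the prompt format, with made-up examples and no specific model API assumed:)

```python
# Hypothetical few-shot prompt: the "training set" for the new task is just
# text in the context window. The model is never fine-tuned; adaptation
# happens inside one forward pass -- the emergent meta-learning in question.
prompt = "\n".join([
    "English: cheese -> French: fromage",
    "English: house -> French: maison",
    "English: dog -> French:",  # the model is expected to continue the pattern
])
# completion = some_llm.generate(prompt)  # placeholder; no particular API implied
print(prompt)
```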

So not only did nothing in the meta-learning field really take off or have impact outside of computer science research papers, its original raison d'être has also been subsumed by a completely different line of research.

Meta-learning is no longer a goal in itself; it's understood to be a side effect of sufficiently large models.

2

carlml t1_ircfo0x wrote

Are the SOTA in RL for few-shot learning not meta-learning based?

2

PeedLearning t1_irdfrn4 wrote

I am not sure what you would consider SOTA in few-shot RL. The benchmarks I know of are quite ad hoc and don't have much impact outside of computer science research papers.

The people that work on applying RL for actual applications don't seem to use meta-RL.

2