Comments

PeedLearning t1_ir29bk8 wrote

Chelsea Finn? I knew very few people who were using MAML during her PhD, and even fewer after.

I reckon Ian Goodfellow, for example, had a lot of impact during his PhD. Alex Krizhevsky is another name with big impact.
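(For anyone who hasn't run into MAML: it learns an initialization that one or two gradient steps can adapt to a new task. A minimal first-order sketch on a toy quadratic per-task loss, picked purely for illustration:)

```python
import numpy as np

# First-order MAML sketch. Each task i is "move theta to target c_i",
# with loss L_i(theta) = ||theta - c_i||^2 -- a toy objective chosen
# only to show the inner/outer update structure.
rng = np.random.default_rng(0)
tasks = [rng.normal(size=2) for _ in range(8)]   # task targets c_i

def grad(theta, c):
    return 2.0 * (theta - c)                     # gradient of ||theta - c||^2

theta = np.zeros(2)       # meta-initialization being learned
alpha, beta = 0.1, 0.05   # inner (adaptation) / outer (meta) step sizes

for _ in range(200):
    meta_grad = np.zeros_like(theta)
    for c in tasks:
        # Inner loop: one gradient step of task-specific adaptation.
        adapted = theta - alpha * grad(theta, c)
        # First-order MAML: evaluate the task gradient at the adapted
        # parameters, ignoring the second-order (Hessian) term.
        meta_grad += grad(adapted, c)
    theta -= beta * meta_grad / len(tasks)

print(theta)  # settles near the mean of the task targets
```

Full MAML also backpropagates through the inner step, which brings in a Hessian-vector product; the first-order variant above drops that term.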

haowanr t1_ir2jbcn wrote

Jonathan Frankle, the guy behind the lottery ticket hypothesis, seems quite impressive IMO.

rando_techo t1_ir2ze94 wrote

It's nice to meet you, CHELSEA FINN!

k3iter t1_ir31d5r wrote

Robin Rombach? The guy literally fueled a $1B unicorn.

snekslayer t1_ir3i0o1 wrote

(Non-ML) Gerard 't Hooft, who got a Nobel Prize for his PhD work?

fromnighttilldawn t1_ir53pvu wrote

Not to rag on Chelsea Finn, who I am certain is brilliant, but:

  1. I can't understand her. She falls into the classic ML research habit of defining very specialized new terms or frameworks without bothering to explain them in the context of what everybody else is already familiar with.
  2. In the same vein, there's basically no comment on how her methods compare with, say, traditional methods. If you're doing robot grasping, I'm not sure you're the first one to do it.
  3. All the robot money comes from DARPA and the Office of Naval Research, which I guess comes from killing brown people overseas? Yes... DARPA/ONR are part of America's big weapons industry. They literally have .mil in their website names.

fromnighttilldawn t1_ir549x1 wrote

But the Adam paper was wrong, so. It is no better than cooking up an equation, which I guess is impressive, but if you know the right people then the overall contribution is very low. Adam was literally one or two steps away from whatever Hinton was doing, and Hinton was literally the co-author's (forgot his name) supervisor or something.
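For what it's worth, the two updates really are close. A side-by-side sketch in NumPy (standard formulas with the usual default hyperparameters, nothing specific to this thread):

```python
import numpy as np

def rmsprop_step(theta, g, v, lr=1e-3, beta2=0.9, eps=1e-8):
    # Hinton's RMSprop: scale the step by a running average of squared gradients.
    v = beta2 * v + (1 - beta2) * g**2
    return theta - lr * g / (np.sqrt(v) + eps), v

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g      # step 1: add a first-moment (momentum) average
    v = beta2 * v + (1 - beta2) * g**2   # same second moment as RMSprop
    m_hat = m / (1 - beta1**t)           # step 2: correct the bias from zero initialization
    v_hat = v / (1 - beta2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Setting beta1 = 0 and dropping the bias correction recovers RMSprop, which is the sense in which Adam is "one or two steps" away.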

fromnighttilldawn t1_ir54kie wrote

I keep trying to find one thing she published that's considered successful, and I find myself having to define success very narrowly. This was when I was doing a general survey of RL techniques.

PeedLearning t1_ir70vku wrote

Hundreds of PhD students were in their position. Few made such an impact.

It's not easy to be impactful, even with a good supervisor. In Krizhevsky's case, one could even argue he had a big impact despite having Hinton as a supervisor: AlexNet was kind of built behind Hinton's back, as he didn't approve of the research direction. Hinton did turn around later and recognize its importance, though.

badabummbadabing t1_ir9bv9x wrote

It did have an error in its convergence proof (which was later rectified by other people). But:

  • the proof only applied to convex cost functions anyway (general convergence proofs in this sense are impossible for nonconvex problems like neural net training);
  • Adam is literally the most used optimiser for neural network training; it would be crazy to deny its significance because of a technical error in a proof, in a regime that is irrelevant for this application.

Regarding "whatever Hinton was doing": Are you talking about RMSprop? Sure, it's another momentum optimizer. There are many of them.

PeedLearning t1_irbosst wrote

(I have published in the meta-learning field myself, and worked a lot on robotics.)

I see no applications of meta-learning appearing outside of self-citations within the field. The SOTA in supervised learning doesn't use any meta-learning. The SOTA in RL doesn't either. The promise of learning to learn never really came true...

... until large supervised language models seemed to suddenly meta-learn as an emergent property.

So not only did nothing in the meta-learning field really take off and have an impact outside of computer science research papers, its original reason for being has been subsumed by a completely different line of research.

Meta-learning is no longer a goal; it's understood to be a side effect of sufficiently large models.
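That emergent behaviour is what's now called in-context learning: a frozen model picks up a task purely from examples in its prompt. A tiny illustration (hypothetical prompt string, no model call made):

```python
# The "inner loop" is just the context window: the frozen LM infers the
# task from a few input -> output pairs and completes the final one,
# with no gradient updates at all.
prompt = """Translate English to French.
sea otter -> loutre de mer
cheese -> fromage
plush giraffe ->"""
# A sufficiently large LM typically completes this with "girafe en peluche".
```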

PeedLearning t1_irdfrn4 wrote

I am not sure what you would consider SOTA in few-shot RL. The benchmarks I know of are quite ad hoc and don't actually have much impact outside of computer science research papers.

The people who work on applying RL to actual applications don't seem to use meta-RL.
