
TiredOldCrow t1_iv8tqar wrote

I know it's naive to expect machine learning to imitate life too closely, but for animals, "models" that are successful enough to produce offspring pass on elements of those "weights" to their children through nature+nurture.

The idea of weighting more successful previous models more heavily when "reincarnating" future models, and potentially borrowing some concepts from genetic algorithms for combining multiple successful models, seems interesting to me. Something like the sketch below, maybe.
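A minimal sketch, assuming each parent model has been flattened into a NumPy parameter vector and "fitness" is something like episodic return; the `recombine` helper and the softmax weighting are illustrative choices, not from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def recombine(parent_weights, fitnesses, mutation_std=0.01):
    """Blend flattened parent parameter vectors, weighting fitter
    parents more heavily (softmax over fitness), then apply a small
    Gaussian mutation -- a GA-flavored way to seed a child model."""
    f = np.asarray(fitnesses, dtype=float)
    probs = np.exp(f - f.max())
    probs /= probs.sum()
    child = sum(p * w for p, w in zip(probs, parent_weights))
    return child + rng.normal(0.0, mutation_std, size=child.shape)

# Three hypothetical "parent" models, flattened to parameter vectors,
# with fitness scores (e.g., episodic returns) from evaluation.
parents = [rng.normal(size=100) for _ in range(3)]
child = recombine(parents, fitnesses=[1.2, 3.4, 0.7])
```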

60

ingambe t1_ivaj6e5 wrote

Evolution strategies work much like the process you described. For very small neural networks they work very well, especially in environments with sparse or quasi-sparse rewards. But as soon as you try larger neural nets (CNN + MLP, or Transformer-like architectures), the process becomes super noisy, and you either need to produce tons of offspring for the population or fall back on gradient-based techniques.
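For anyone curious, here's a rough sketch of the perturbation-based update (in the spirit of OpenAI's evolution strategies); the toy `evaluate` function and all hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, evaluate, pop_size=50, sigma=0.1, lr=0.02):
    """One ES update: sample a population of perturbed offspring,
    score them, and move theta along the fitness-weighted average
    of the perturbations (a gradient estimate from rollouts alone)."""
    noise = rng.normal(size=(pop_size, theta.size))
    rewards = np.array([evaluate(theta + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = noise.T @ rewards / (pop_size * sigma)
    return theta + lr * grad

# Toy fitness: higher reward the closer the parameters are to a target.
target = np.ones(10)
evaluate = lambda w: -np.sum((w - target) ** 2)

theta = np.zeros(10)
for _ in range(200):
    theta = es_step(theta, evaluate)
```

With a big network, the number of offspring needed for a usable gradient estimate is exactly where this blows up.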

9

life_is_harsh t1_iva1h5l wrote

I feel both are useful, no? I thought of reincarnation as how humans learn: we don't learn from a blank slate but often reuse our own previously learned knowledge or learn from others during our lifetime (e.g., when learning to play a sport, we might learn from an instructor but eventually learn on our own).
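One way to formalize "learn from an instructor, then on your own" is policy distillation with an annealed teacher term. A sketch, assuming `teacher` and `student` are PyTorch modules mapping states to action logits; this is just an illustration, not the paper's exact method:

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, states, optimizer, temperature=1.0):
    """One step of policy distillation: train the student to match the
    teacher's action distribution via a KL loss. In kickstarting-style
    setups this term is annealed away so the student eventually learns
    on its own from environment reward."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(states) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(states) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```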

7

smallest_meta_review OP t1_iva27vt wrote

While nature + nurture seems useful across lifetimes, reincarnation might be how we learn within a lifetime? I'm not an expert, but I found this comment interesting:

> This must be a fundamental part of how primates like us learn, piggybacking off of an existing policy at some level, so I'm all for RL research that tries to formalize ways it can work computationally.

OG Comment

2

DanJOC t1_ivbg48k wrote

Essentially a GAN

−3

veshneresis t1_ivdazlf wrote

What similarity to a GAN are you seeing? I can't really see how it's similar.

1