
gibs t1_itnnx1y wrote

Just read the first part -- that is a super interesting approach. I'm convinced that robust continual learning is a critical component for AGI. It also reminds me of another of Lex Fridman's podcasts where he had a cognitive scientist guest (I forget who) whose main idea about human cognition was that we have a collection of mini-experts for any given cognitive task. They compete (or have their outputs summed) to give us a final answer to whatever the task is. I think the paper's approach of automatically compartmentalising knowledge into functional components is another critical piece of the architecture for human-like cognition. Very very cool.
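
For concreteness, here's a minimal sketch of that "mini-experts" idea -- my own illustration, not anything from the paper or the podcast. A gate scores each expert's relevance to the input, and the experts either compete (winner-take-all) or have their outputs summed with softmax weights. All names and shapes here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out = 4, 8, 3

# Each "mini-expert" is just an independent linear map in this toy version.
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
# The gate produces one relevance score per expert for the current input.
gate = rng.standard_normal((d_in, n_experts))

def forward(x, compete=False):
    scores = x @ gate                              # one score per expert
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over experts
    outputs = np.stack([x @ W for W in experts])   # each expert's answer
    if compete:
        return outputs[np.argmax(weights)]         # winner-take-all: experts compete
    return (weights[:, None] * outputs).sum(axis=0)  # weighted sum of expert outputs

x = rng.standard_normal(d_in)
print(forward(x, compete=True))   # single expert wins
print(forward(x, compete=False))  # blended answer
```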
