Submitted by theanswerisnt42 t3_10wtumf in MachineLearning

I came across a few comments in this community about researchers developing AI algorithms inspired by ideas from neuroscience and cognition. I'd like to know how successful this approach has been in terms of opening up new perspectives on problems.

What are some of the key issues researchers are trying to address this way? What are some future directions in which research may progress?

I have a rough idea that this could be one way to inspire sample-efficient RL, but I'd love to hear about other work that goes on in this area.

8

Comments


katadh t1_j7pv6f1 wrote

Look into spiking neural networks if you're not aware of them already.
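
If you haven't seen them before: the basic unit is something like a leaky integrate-and-fire (LIF) neuron, which accumulates a membrane potential over time and emits binary spikes instead of continuous activations. A minimal NumPy sketch (the decay factor, threshold, and input values here are arbitrary, just for illustration):

```python
import numpy as np

def lif_neuron(input_current, beta=0.9, threshold=1.0):
    """Simulate a single leaky integrate-and-fire neuron over time.

    input_current: 1D array of input drive at each time step.
    beta: membrane potential decay factor (the "leak").
    threshold: potential at which the neuron fires a spike and resets.
    """
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = beta * v + i_t      # leaky integration of the input
        if v >= threshold:
            spikes[t] = 1.0     # emit a binary spike
            v = 0.0             # reset the membrane potential
    return spikes

# Example: a constant drive produces a regular spike train.
print(lif_neuron(np.full(20, 0.3)))
```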

3

wintermute93 t1_j7pxlsj wrote

Have spiking networks actually produced any meaningful results? Granted, the last time I looked into the field was like 5 years ago, but back then the answer was definitely "no, these are just a toy".

2

currentscurrents t1_j7q8q5v wrote

So far nobody's figured out a good way to train them.

You can't easily do backprop, but you wouldn't want to anyway - the goal of SNNs is to run on ultra-low-power analog computers. For this you need local learning, where neurons can learn by communicating only with adjacent neurons. There are some ideas (forward-forward learning, predictive coding, etc.), but so far nothing is as good as backprop.
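
To make "local" concrete: with something like Oja's rule (a normalized Hebbian update), each weight changes based only on the activity of the two neurons it connects, with no backward pass at all. A rough NumPy sketch, with made-up layer sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(32, 64))  # weights from 64 inputs to 32 units

def local_update(W, x, lr=0.01):
    """One Oja's-rule update: purely local, no gradients propagated back.

    Each weight W[i, j] changes using only its input x[j] and the
    activity y[i] of the unit it feeds into.
    """
    y = W @ x                                          # forward activity
    W += lr * (np.outer(y, x) - (y**2)[:, None] * W)   # Hebbian term + normalizing decay
    return W

for _ in range(100):
    x = rng.normal(size=64)
    W = local_update(W, x)
```

Rules like this learn unsupervised features rather than optimizing a task loss, which is part of why matching backprop with purely local updates is hard.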

There's a bit of a chicken-and-egg problem too. Without a good way to train SNNs, there's little interest in the specialized hardware - and without the hardware, there's little interest in good ways to train them. You can emulate them on regular computers but that removes all their benefits.

3

katadh t1_j7s68c6 wrote

There has been a lot of progress in the last 2-3 years. They're still not quite at the level of ANNs in general, but they've been gaining ground quickly and do outperform ANNs on some specific tasks -- usually things with a temporal component but low data dimensionality per time step. Another area where results are comparable to ANNs is object detection.

1

katadh t1_j7s73hw wrote

SNN-ANN conversion and surrogate gradient methods can both get good results these days, so training has become a lot more comparable to ANNs than it was in the past. I would agree, though, that there is still a disconnect between the hardware and the software, which is preventing SNNs from reaching the dream of super-low-power models.
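
For reference, the surrogate gradient trick is just: keep the hard spike threshold in the forward pass, but swap in a smooth "fake" derivative in the backward pass so gradients can flow. A minimal PyTorch sketch (the fast-sigmoid-shaped surrogate and its slope constant are one common choice, not the only one):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Derivative of a fast sigmoid, used as a stand-in for the spike's "gradient".
        surrogate = 1.0 / (1.0 + 10.0 * membrane_potential.abs()) ** 2
        return grad_output * surrogate

v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()  # gradients flow through the surrogate
print(v.grad)
```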

1

EyeSprout t1_j7sqjzc wrote

CNNs, along with some early hand-crafted optimizations for them like Gabor functions (which used to be useful but are no longer really needed now that computers are faster), are sort of inspired by neuroscience research. Attention mechanisms were also floating around in neuroscience for quite a while, in models of memory and retrieval, before being streamlined and simplified into the form we see today.
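
(To illustrate the Gabor example: early vision pipelines would convolve images with a fixed bank of oriented Gabor filters, essentially a hand-designed, V1-like first layer, where learned conv filters now do the job. A quick NumPy sketch with arbitrary parameter values:)

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, sigma=3.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Build one oriented Gabor filter: a Gaussian envelope times a sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_theta**2 + (gamma * y_theta)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_theta / lambd + psi)
    return envelope * carrier

# A small filter bank at four orientations, like a fixed "first conv layer".
bank = np.stack([gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)])
print(bank.shape)  # (4, 15, 15)
```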

In general, when things go from neuroscience to machine learning, it takes a lot of stripping ideas down to their genuinely relevant and useful components before they become workable. Neuroscientists have a lot of ideas for mechanisms, but not all of them are useful...

3

currentscurrents t1_j7sri62 wrote

SNN-ANN conversion is a kludge - not only do you have to train an ANN first, it also means your SNN is incapable of learning anything new.

Surrogate gradients are better! But they're still non-local and require backward passes, which means you're missing out on the massive parallelization you could achieve with local learning rules on the right hardware.

Local learning is the dream, and would have benefits for ANNs too: you could train a single giant model distributed across an entire datacenter or even multiple datacenters over the internet. Quadrillion-parameter models would be technically feasible - I don't know what happens at that scale, but I'd sure love to find out.

2

leventov t1_j7ubimw wrote

Top AI researchers (Yoshua Bengio, Yann LeCun) are essentially cognitive scientists. By "cognitive science" I mean general theories of cognition, not specifically human cognition. If you watch any recent talk by Bengio (example), you'll recognise that it's a talk about cognitive science at least as much as it is about AI. From his talks you can also get a rough sense of the kinds of problems these researchers are tackling when they move to the level of thinking about cognitive science.

Theories of cognitive science and ML/DL form an "abstraction-grounding" stack:
general theories of cognition (intelligence, agency) ->
general theories of how DNNs behave at runtime ->
interpretability theories for a concrete DNN architecture.

1

katadh t1_j7x7924 wrote

There's been a decent amount of work showing that SNNs should be much more energy-efficient. There is some empirical work showing other potential advantages (like robustness), but most of it is still too nascent to be definitive.

1