
arhetorical t1_izwxay5 wrote

I mostly hear about surrogate gradient descent, what other methods work well in practice?

4

aleph__one t1_izwyrcf wrote

Yeah, the surrogate gradient stuff works OK. Others that are decent:

1) STDP variants, especially dopamine-modulated STDP (emulates RL-like reinforcement)

2) For networks < 10M params, evolution strategies and similar zeroth-order solvers can work well operating directly on the weights

3) Variational solvers can work if you structure the net + activations appropriately
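A minimal sketch of the surrogate-gradient idea mentioned above: the forward pass uses the non-differentiable Heaviside spike, while the backward pass substitutes a smooth surrogate derivative (here a fast-sigmoid shape, a common choice). The threshold, `beta`, and function names are illustrative, not from any particular library.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    # Forward pass: Heaviside step — spike (1.0) when the membrane
    # potential v crosses the threshold, else 0.0.
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, beta=10.0):
    # Backward pass: the Heaviside's true derivative is zero almost
    # everywhere, so training substitutes a smooth surrogate. This one
    # is the derivative of a fast sigmoid, peaked at the threshold;
    # beta controls how sharply it concentrates there.
    x = beta * (v - threshold)
    return beta / (1.0 + np.abs(x)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])   # example membrane potentials
spikes = spike_forward(v)             # hard 0/1 spikes for the forward pass
grads = spike_surrogate_grad(v)       # nonzero everywhere, largest near threshold
```

In a full training loop this pair would be wrapped in a custom autograd function so the hard spike drives the network dynamics while the surrogate carries the gradient.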

12

arhetorical t1_izxbkdf wrote

I see, thanks. Why did you choose to use SNNs for your application instead of conventional ANNs? Are you using a neuromorphic chip?

1

aleph__one t1_izxu46b wrote

No neuromorphic chip. Main reason is interpretability.

2

arhetorical t1_izzryk4 wrote

Oh, I haven't heard about using SNNs for interpretability. I thought they were on the same level as ANNs. Sorry for all the questions, but can you elaborate on how they're more interpretable?

2

2358452 t1_j04t3pw wrote

The spiking events should be much more sparse and therefore probably easier to interpret.
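A quick illustration of that sparsity argument, with made-up numbers: a dense ANN activation map has a value at essentially every (neuron, timestep) entry, while a spike raster at a low firing rate leaves only a small fraction of events to inspect. The 2% rate and array shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activity from 100 neurons over 200 timesteps.
dense_acts = rng.normal(size=(100, 200))                # ANN-style: ~every entry nonzero
spikes = (rng.random((100, 200)) < 0.02).astype(float)  # SNN-style: ~2% firing rate

def activity_fraction(a):
    # Fraction of entries carrying signal (nonzero values).
    return np.count_nonzero(a) / a.size

# The spike raster has far fewer events per neuron to attribute
# meaning to, which is the basis of the interpretability claim.
```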

1