Submitted by mrx-ai t3_zjud5l in MachineLearning
arhetorical t1_izwxay5 wrote
Reply to comment by aleph__one in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
I mostly hear about surrogate gradient descent; what other methods work well in practice?
aleph__one t1_izwyrcf wrote
Yeah, the surrogate gradient stuff works OK. Others that are decent:

1) STDP variants, especially dopamine-modulated STDP (emulates RL-like reinforcement)

2) For networks under ~10M params, evolution strategies and similar zeroth-order solvers can work well, operating directly on the weights

3) Variational solvers, if you structure the net + activations appropriately

Rough sketches of the surrogate gradient approach and an ES update are below.
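For the surrogate gradient idea, a minimal PyTorch sketch: a Heaviside spike in the forward pass, with a smooth fast-sigmoid stand-in for its derivative in the backward pass. The layer sizes, slope, rate coding, and toy task here are all illustrative choices, not from any particular paper:

```python
import torch

SLOPE = 10.0  # surrogate sharpness; illustrative choice

class SpikeFn(torch.autograd.Function):
    """Heaviside step forward; fast-sigmoid surrogate derivative backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # smooth stand-in for the Dirac delta: 1 / (SLOPE*|v| + 1)^2
        return grad_out / (SLOPE * v.abs() + 1.0) ** 2

def lif_step(x, v, w, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: decay membrane, integrate input, spike, soft reset."""
    v = beta * v + x @ w
    s = SpikeFn.apply(v - threshold)
    return s, v - s * threshold

# Toy training loop: rate-code inputs over 25 timesteps, classify on spike counts.
torch.manual_seed(0)
w = (0.1 * torch.randn(20, 5)).requires_grad_()
opt = torch.optim.SGD([w], lr=0.5)
x = torch.rand(32, 20)                 # fake input intensities in [0, 1]
y = torch.randint(0, 5, (32,))         # fake labels
for step in range(200):
    v, counts = torch.zeros(32, 5), torch.zeros(32, 5)
    for t in range(25):
        inp = (torch.rand_like(x) < x).float()   # Bernoulli rate coding
        s, v = lif_step(inp, v, w)
        counts = counts + s
    loss = torch.nn.functional.cross_entropy(counts, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

And for 2), a bare-bones evolution-strategies update operating directly on a flat weight vector (OpenAI-style antithetic sampling); `fitness` is a stand-in for whatever task reward you evaluate the spiking net on:

```python
import numpy as np

def es_step(theta, fitness, pop=64, sigma=0.1, lr=0.02):
    """One ES update: antithetic Gaussian perturbations of the weights, no gradients needed."""
    eps = np.random.randn(pop // 2, theta.size)
    eps = np.concatenate([eps, -eps])                       # antithetic pairs
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr * (adv[:, None] * eps).mean(axis=0) / sigma
```

Since ES never needs a derivative of the spike nonlinearity, it sidesteps the non-differentiability problem entirely, which is part of why it's attractive for small SNNs.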
arhetorical t1_izxbkdf wrote
I see, thanks. Why did you choose to use SNNs for your application instead of conventional ANNs? Are you using a neuromorphic chip?
aleph__one t1_izxu46b wrote
No neuromorphic chip. Main reason is interpretability.
arhetorical t1_izzryk4 wrote
Oh, I hadn't heard about using SNNs for interpretability. I thought they were about as interpretable as ANNs. Sorry for all the questions, but can you elaborate on how they're more interpretable?
2358452 t1_j04t3pw wrote
The spiking events should be much sparser than dense ANN activations, and therefore probably easier to interpret.
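To put a rough number on that, you can just log the spike events and look at firing rates; a quick sketch, assuming spikes are recorded in a (T, batch, neurons) tensor (that layout is my assumption):

```python
import torch

def activity_report(spikes):
    """spikes: (T, batch, neurons) 0/1 tensor of recorded spike events."""
    overall = spikes.float().mean().item()          # fraction of neuron-timesteps that fired
    per_neuron = spikes.float().mean(dim=(0, 1))    # each neuron's firing rate
    active = int((per_neuron > 0).sum())
    print(f"activity: {overall:.2%}; neurons that ever fired: {active}/{spikes.shape[-1]}")
    return per_neuron
```

In a typical dense ANN layer most units carry some nonzero activation on every input; with spikes you often see only a small fraction of neuron-timesteps firing, which narrows down which units you need to look at to explain an output.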