Submitted by FrereKhan t3_11zg5rr in MachineLearning
Comments
CommunismDoesntWork t1_jdcloqz wrote
Is there specialized hardware for SNNs yet?
FrereKhan OP t1_jdcn3b6 wrote
Yes, there are a few options. Rockpool is designed to work with SNN chips from SynSense (https://synsense.ai). Intel has Loihi, and there is also Akida from BrainChip…
CommunismDoesntWork t1_jdcpcv9 wrote
Are those chips general purpose SNN accelerators in the same way GPUs are general purpose NN accelerators? If so, what's stopping someone from creating a 100B parameter SNN similar to LLMs?
FrereKhan OP t1_jdcvobu wrote
Sort of, yes; Xylo is a general-purpose SNN accelerator, but it is scaled for smaller problems, on the order of 1000 neurons.
But in principle there's nothing standing in the way of building a 100B parameter SNN.
CommunismDoesntWork t1_jdcxx5u wrote
>But in principle there's nothing standing in the way of building a 100B parameter SNN.
That's awesome. In that case, I'd pivot my research if I were you. These constrained optimization problems on limited hardware are fun, and I'm sure they have some legitimate uses, but LLMs have proven that scale is king. Going in the opposite direction and trying to get SNNs to scale to billions of parameters might be world-changing.
Because NNs are only going to get bigger and more costly to train. If SNNs and their accelerators can speed up training and ultimately reduce costs, that would be massive. You could be the first person in the world to create a billion-parameter SNN. Once you show the world that it's possible, the floodgates will open.
Art10001 t1_jdd0ag1 wrote
BrainChip has 1 million neurons already. Loihi and Loihi 2 are similar.
Art10001 t1_jdd0ihw wrote
Great. Neuromorphic technology is genius (or at least cool) and very underappreciated.
KerfuffleV2 t1_jdd5b3d wrote
Have you already seen this? https://github.com/ridgerchu/SpikeGPT
CommunismDoesntWork t1_jdd87tx wrote
I haven't, that's really cool though!
FrereKhan OP t1_jdc5qes wrote
This manuscript describes in detail the neuron and synapse simulations for a mixed-signal neuromorphic spiking NN device, as well as the training, quantisation and deployment pipeline.
The idea is to build SNN applications, trained using gradient methods, that are robust against the mismatch exhibited by mixed-signal devices in general. By including a detailed trainable simulation of the neuron and synapse models, as well as trainable hardware-verified parameter mismatch models, you can perform backprop training of SNNs that are still functional when deployed to hardware, without per-chip calibration or tweaking.
Previously it was very difficult to build functional SNNs for these devices; it required a lot of hand-tweaking and/or per-device calibration. With these new tools the aim is to train once, then deploy at scale to many chips, with some guarantees on performance degradation. A rough sketch of the training idea follows below.
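To give a flavour of the mismatch-aware training idea, here's a rough sketch in plain PyTorch. This is *not* Rockpool's actual API; the class names (`NoisyLIF`, `SurrogateSpike`), the `mismatch_std` level, and the toy data are all illustrative assumptions. The point is just that each forward pass perturbs the weights and decay constants with multiplicative Gaussian noise, so backprop has to find parameters that still work under per-device variation:

```python
# Hypothetical sketch of mismatch-aware SNN training (NOT Rockpool's API).
# See https://rockpool.ai for the real pipeline.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: d/dv ≈ 1 / (1 + |v|)^2
        return grad_out / (1.0 + v.abs()) ** 2

class NoisyLIF(nn.Module):
    """LIF layer whose weights and decay are perturbed on every forward
    pass, standing in for per-device fabrication mismatch."""
    def __init__(self, n_in, n_out, mismatch_std=0.2):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_in, n_out) * 0.1)
        self.alpha = nn.Parameter(torch.full((n_out,), 0.9))  # membrane decay
        self.mismatch_std = mismatch_std

    def forward(self, x):  # x: (batch, time, n_in) spike trains
        # Multiplicative Gaussian noise models parameter mismatch
        w = self.w * (1.0 + self.mismatch_std * torch.randn_like(self.w))
        alpha = (self.alpha
                 * (1.0 + self.mismatch_std * torch.randn_like(self.alpha))
                 ).clamp(0.0, 1.0)
        v = torch.zeros(x.shape[0], self.w.shape[1], device=x.device)
        spikes = []
        for t in range(x.shape[1]):
            v = alpha * v + x[:, t] @ w
            s = SurrogateSpike.apply(v - 1.0)  # threshold = 1
            v = v - s                          # soft reset on spike
            spikes.append(s)
        return torch.stack(spikes, dim=1)

# Toy training loop: the loss is averaged over several mismatch draws,
# pushing the network toward parameters that survive deployment.
net = NoisyLIF(16, 4)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = (torch.rand(8, 50, 16) < 0.1).float()  # random input spike trains
target = torch.rand(8, 4)                  # dummy firing-rate targets
for step in range(100):
    loss = sum(
        ((net(x).mean(dim=1) - target) ** 2).mean() for _ in range(4)
    ) / 4
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging the loss over several independent noise draws is one simple way to make the optimum robust rather than tuned to a single perturbation; the adversarial injection described in the manuscript is a stronger version of the same idea.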
We've integrated these tools into the open-source deep SNN library Rockpool (https://rockpool.ai).
Manuscript abstract:
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads, taking advantage of sparse, asynchronous computation within Spiking Neural Networks (SNNs). However, deploying robust applications to these devices is complicated by limited controllability over analog hardware parameters, as well as unintended parameter and dynamics variations of analog circuits due to fabrication non-idealities. Here we demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor Dynap-SE2. The methodology utilizes an unsupervised weight quantization method to optimize the network's parameters, coupled with adversarial parameter noise injection during training. The optimized network is shown to be robust to the effects of quantization and device mismatch, making the method a promising candidate for real-world applications under hardware constraints. This work extends Rockpool, an open-source deep-learning library for SNNs, with support for accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the development and deployment process for the neuromorphic community, making mixed-signal neuromorphic processors more accessible to researchers and developers.
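For intuition about the quantization step, here is a deliberately simplified per-tensor sketch of post-training weight quantization. This is not the manuscript's actual method (which is more involved), and `quantize_weights` and `n_bits` are hypothetical names; it just illustrates why no labels are needed, hence "unsupervised":

```python
# Minimal sketch of post-training weight quantization to a low-bit grid
# (illustrative only; not the method from the manuscript).
import torch

def quantize_weights(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Map float weights onto a signed n_bits integer grid, then rescale
    back to float. Only the weights themselves are needed."""
    q_max = 2 ** (n_bits - 1) - 1        # e.g. 7 for 4-bit signed
    scale = w.abs().max() / q_max        # per-tensor scale factor
    w_int = torch.round(w / scale).clamp(-q_max, q_max)
    return w_int * scale                 # dequantized float weights

w = torch.randn(16, 4) * 0.1
w_q = quantize_weights(w)
print("max abs quantization error:", (w - w_q).abs().max().item())
```

Training with mismatch noise (as in the sketch further up) is what makes the network tolerate this kind of rounding error on top of device variation.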