Submitted by currentscurrents t3_104admo in MachineLearning

There are a number of companies out there making special-purpose "neuromorphic" chip architectures that are supposed to be better suited for neural networks. Some of them you can buy for as little as $500.

Most of them are designed for Spiking Neural Networks, probably because of the similarity to the human brain. Innatera's chip implements the neural network on an analog computer, which I find very interesting.

  • Is the performance really better than GPUs? Could this achieve the dream of running a model on as little power as the brain uses?

  • Are spiking neural networks useful for anything? I don't know of any tasks where an SNN is the current state-of-the-art in performance.

All the good results right now seem to be coming out of transformers, but maybe that's just because they're so well-suited for the hardware we have available.

20

Comments


IntelArtiGen t1_j33v5ir wrote

>Is the performance really better than GPUs?

Depends on the model, I guess. Usual ANNs work with tensors, so you probably can't do much better than GPUs (or TPUs).

>Could this achieve the dream of running a model on as little power as the brain uses?

On that alone, I doubt it. Even if the hardware could theoretically reproduce how the brain works with the same power efficiency, it doesn't mean you would have the algorithm to use it efficiently. Perhaps GPUs could actually be more efficient than a human brain in theory with a perfect algorithm, but we don't have that algorithm and we don't have proof that it can't exist.

>Are spiking neural networks useful for anything?

I've read papers that say they do work, but the papers I've read use them on the same tasks we use for usual ANNs, and they perform worse (from what I've seen). Perhaps it's also a bad idea to test them on the same tasks. Usual ANNs are designed for current tasks, and current tasks are often designed for usual ANNs. It's easier to use the same datasets, but I don't think the point of SNNs is just to try to perform better on those datasets, but rather to try more innovative approaches on some specific datasets. Biological neurons use time for their action potentials, so if you want to reproduce their behavior it's probably better to test them on videos / sounds, which also depend on time.

I would say it's useful for researchers who have ideas. Otherwise I'm not sure. And if you have an idea, I guess it's better to first try it on usual hardware and only use neuromorphic chips if you're sure they'll run faster and improve the results.

The hardware is not the only limit: if I gave an AI researcher a living human brain, that researcher probably couldn't make AGI out of it. You also need the right algorithms.

7

currentscurrents OP t1_j34uma6 wrote

>That alone I doubt it, even if it could theoretically reproduce how the brain works with the same power efficiency it doesn't mean you would have the algorithm to efficiently use this hardware.

I meant just in terms of compute efficiency, using the same kind of algorithms we use now. It's clear they won't magically give you AGI, but Innatera claims 10000x lower power usage with their chip.

This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.
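To illustrate the sparsity point, here's a toy leaky integrate-and-fire neuron in Python (all the constants and names are mine, just for illustration, not Innatera's design): downstream work only happens at the few time steps where a spike actually fires, which is what an event-driven analog chip would exploit.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    """Toy leaky integrate-and-fire neuron; returns spike times."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v = leak * v + i          # integrate input with a leaky membrane
        if v >= threshold:        # threshold crossing -> emit a spike
            spikes.append(t)
            v = 0.0               # reset after firing
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.2, size=100)   # weak, noisy input
print(lif_neuron(current))                  # only a handful of spike events in 100 steps
```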

>Usual ANNs are designed for current tasks and current tasks are often designed for usual ANNs. It's easier to use the same datasets but I don't think the point of SNNs is just to try to perform better on these datasets but rather to try more innovative approaches on some specific datasets.

I feel like a lot of SNN research is motivated by understanding the brain rather than by being the best possible AI. It also seems harder to get traditional forms of data into and out of the network; you have to convert images into spike timings, for which there are several methods, each with upsides and downsides.
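For instance, one common scheme is Poisson rate coding, where each pixel's intensity becomes a per-step firing probability. A minimal sketch (parameter names and values are mine, purely illustrative):

```python
import numpy as np

def poisson_encode(image, n_steps=50, max_rate=0.5, seed=0):
    """Rate-code an image with values in [0, 1] as a binary spike train.

    Brighter pixels fire more often; fine intensity detail gets smeared
    over time, which is one of the trade-offs of this encoding.
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(image, 0.0, 1.0) * max_rate              # per-step firing probability
    return (rng.random((n_steps,) + image.shape) < probs).astype(np.uint8)

image = np.random.default_rng(1).random((28, 28))            # stand-in for an MNIST digit
spikes = poisson_encode(image)                               # shape (50, 28, 28)
print(spikes.shape, spikes.mean())
```

Other schemes, e.g. latency coding where brighter pixels fire earlier, trade that temporal smearing for different problems, which is the upside/downside situation mentioned above.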

2

visarga t1_j36ccg4 wrote

> Innatera claims 10000x lower power usage with their chip.

Unfortunately it's just a toy. Not gonna run GPT-3 at the edge.

Googled for you: Innatera's third-generation AI chip has 256 neurons and 65,000 synapses and runs inference at under 1 milliwatt, which doesn't sound like a lot compared to the human brain, which has 86 billion neurons and operates at around 20 watts.
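Back-of-envelope, just from the numbers above (and pretending an artificial "neuron" is comparable to a biological one, which it isn't):

```python
brain_neurons, brain_watts = 86e9, 20
chip_neurons, chip_watts = 256, 1e-3          # "under 1 milliwatt", taken at face value

chips_needed = brain_neurons / chip_neurons   # ~3.4e8 chips to match the neuron count
print(f"{chips_needed:.1e} chips, ~{chips_needed * chip_watts / 1e3:.0f} kW total vs {brain_watts} W")
```

So even naively tiling them, you're hundreds of kilowatts away from brain scale at brain-level efficiency.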

4

currentscurrents OP t1_j39hde8 wrote

Not bad for a milliwatt of power though; an Arduino idles at about 15 milliwatts.

I could see running pattern recognition in a battery-powered sensor or something.

2

IntelArtiGen t1_j358s2v wrote

>I meant just in terms of compute efficiency, using the same kind of algorithms we use now.

For SNNs, I'm sure they can be made more efficient, but that doesn't mean they'll have a better score-to-power-consumption ratio on a task than more standard models in their most optimized versions.

>This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.

I understand and I can't disagree, but as I said, we don't have proof that the way we usually do it (with dense layers / tensors) is necessarily less efficient than artificial SNNs or biological NNs, "efficient" meaning accuracy per unit of power consumption. And we don't have a theory that would allow a generic comparison between usual ANNs and SNNs or biological NNs; it would require a generic metric of how "intelligent" these models can be purely because of their design (we don't have that). Neurons in usual ANNs don't represent the same thing as biological neurons.

Also, an optimized model on a modern GPU can run ResNet-50 (fp16) at ~2000 fps at 450 W. We can't directly compare fps with human vision, but if the brain runs on 20 W, that throughput works out to approximately 90 fps for 20 W (and about 30 fps if you say only 7 W go to vision). Of course we don't see at 30 fps and it's hard to compare the accuracy of ResNet-50 with humans, but ResNet-50 is also very far from being the most efficient architecture, and there are more power-efficient GPUs. It's hard to say for sure that current GPUs with SOTA models would be less power-efficient on some tasks than the human brain.
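The arithmetic behind those fps numbers, for anyone who wants to check it:

```python
fps, gpu_watts = 2000, 450          # ResNet-50 fp16 throughput and GPU power from above
fps_per_watt = fps / gpu_watts      # ~4.4 images/s per watt
print(fps_per_watt * 20)            # ~89 fps at the brain's ~20 W budget
print(fps_per_watt * 7)             # ~31 fps if only ~7 W of that goes to vision
```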

>I feel like a lot of SNN research is motivated by understanding the brain rather than being the best possible AI.

It depends on what you call the "best possible AI". SNNs are probably not designed to be SOTA on usual tasks, but the best way to prove that you understand the human brain is to reproduce how it works, and a model that did that would likely be better than current models on a lot of tasks.

2

aibler t1_j367heq wrote

Other than 'compute-in-memory' and being asynchronous, what would you say are the other major differences between neuromorphic and traditional processors?

2

IntelArtiGen t1_j36nsv9 wrote

I think there are multiple kinds of "neuromorphic" processors and they all have different abilities. OP pointed out the power efficiency. Researchers also work on analog chips which don't have the same constraints as traditional circuits.

But how / whether you can truly use some of these differences depends on the use case. It would seem logical that well-exploited neuromorphic processors would be more power-efficient, but that doesn't mean you have the algorithm to exploit them better than current processors for your use case, or that it's necessarily true. For complex tasks, we don't have a proof that would say "no algorithm on a traditional processor can outperform the best algorithm we know on a neuromorphic chip at the same power efficiency".

The main difference is that neuromorphic chips are still experimental, and that traditional chips allowed 10+ years of very fast progress in AI.

1

aibler t1_j38jis7 wrote

Very interesting, thanks so much for the explanation. Should be interesting to see how this develops!

1

Glitched-Lies t1_j3r89e2 wrote

I just bought one from Brainchip. They seem pretty good. I asked them about some of their use cases; they have some videos on their YouTube channel of image-classification tasks on beer bottles, but those seem to be the same kind of tasks you can do on a regular GPU.

The Brainchip PCI chip is interesting because you can develop for it like you normally would, and then send the built neural network to the chip, converting it from a CNN into an SNN, but there doesn't seem to be a great reason to use it this way. It seems like the main use case would be to run a native SNN on it. These NPUs also don't seem to scale the way GPUs do.
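For anyone curious what that CNN-to-SNN conversion means mechanically, here's a generic rate-based sketch of the idea (this is not Brainchip's actual toolchain, whose API I haven't checked; it's just the textbook trick): a trained ReLU activation gets approximated by the firing rate of an integrate-and-fire neuron over a time window, so the converted network approaches the original CNN's output as the window grows.

```python
def if_firing_rate(pre_activation, n_steps=100, threshold=1.0):
    """Approximate ReLU(pre_activation) by an IF neuron's firing rate.

    The neuron receives a constant input each step; with a "soft reset"
    (subtracting the threshold instead of zeroing the potential), the
    average firing rate converges to ReLU(x) / threshold as n_steps grows.
    """
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += pre_activation
        if v >= threshold:
            spikes += 1
            v -= threshold            # soft reset keeps the rate unbiased
    return spikes / n_steps

x = 0.37                              # some pre-activation from a trained CNN layer
print(max(x, 0.0), if_firing_rate(x)) # ReLU value vs. spike-rate approximation
```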

2

TurnipAppropriate360 t1_j64kfeb wrote

Go straight to Brainchip's website and look at their AKIDA NSoC and IP; the tech is there and they're already beginning to commercialise.

AI will be as big for investors in the next 2-5 years as the internet was in the '90s.

1