Submitted by currentscurrents t3_104admo in MachineLearning
IntelArtiGen t1_j33v5ir wrote
>Is the performance really better than GPUs?
Depends on the model, I guess. Usual ANNs work with tensors, so you probably can't do much better than GPUs (or TPUs).
>Could this achieve the dream of running a model on as little power as the brain uses?
On its own, I doubt it. Even if the hardware could theoretically reproduce how the brain works with the same power efficiency, that doesn't mean you would have the algorithm to use it efficiently. Perhaps GPUs could, in theory, be more efficient than a human brain with a perfect algorithm, but we don't have that algorithm and we don't have proof that it can't exist.
>Are spiking neural networks useful for anything?
I've read papers saying they do work, but the papers I've seen evaluate them on the same tasks we use for usual ANNs, and they perform worse (from what I've seen). Perhaps it's also a bad idea to test them on the same tasks. Usual ANNs are designed for current tasks, and current tasks are often designed for usual ANNs. It's easier to reuse the same datasets, but I don't think the point of SNNs is just to try to perform better on these datasets; it's rather to try more innovative approaches on specific datasets. Biological neurons use time for their action potentials, so if you want to reproduce their behavior it's probably better to test them on videos or sounds, which also depend on time.
I would say it's useful for researchers who have ideas. Otherwise I'm not sure. And if you have an idea, it's probably better to first try it on usual hardware and only move to neuromorphic chips if you're sure they'll run faster and improve the results.
The hardware is not the only limit: if I gave an AI researcher a living human brain, that researcher probably couldn't make AGI out of it. You also need the right algorithms.
currentscurrents OP t1_j34uma6 wrote
>On its own, I doubt it. Even if the hardware could theoretically reproduce how the brain works with the same power efficiency, that doesn't mean you would have the algorithm to use it efficiently.
I meant just in terms of compute efficiency, using the same kind of algorithms we use now. It's clear they won't magically give you AGI, but Innatera claims 10000x lower power usage with their chip.
This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.
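A rough sketch of what I mean, using a toy leaky integrate-and-fire neuron in NumPy (the constants are made up for illustration and don't correspond to any real chip):

```python
import numpy as np

# Toy leaky integrate-and-fire neuron (made-up constants, not any real chip).
# The membrane potential leaks toward rest, integrates the input current, and
# only emits a spike when it crosses the threshold - most timesteps are silent.
rng = np.random.default_rng(0)
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

steps = 1000
input_current = rng.uniform(0.0, 2.5, steps)   # arbitrary noisy input

v = 0.0
spikes = np.zeros(steps, dtype=bool)
for t in range(steps):
    v += (dt / tau) * (-v + input_current[t])  # leak + integrate
    if v >= v_thresh:
        spikes[t] = True
        v = v_reset                            # reset after a spike

print(f"{spikes.sum()} spikes in {steps} timesteps ({100 * spikes.mean():.1f}% active)")
```

On a GPU you'd evaluate every unit at every timestep anyway; the hope with an analog neuromorphic chip is that only those few spike events cost energy.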
>Usual ANNs are designed for current tasks, and current tasks are often designed for usual ANNs. It's easier to reuse the same datasets, but I don't think the point of SNNs is just to try to perform better on these datasets; it's rather to try more innovative approaches on specific datasets.
I feel like a lot of SNN research is motivated by understanding the brain rather than being the best possible AI. It also seems harder to get traditional forms of data into and out of the network; for example, you have to convert images into spike timings, and there are several methods for that, each with its own upsides and downsides.
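For example, here's a sketch of two of the common encodings in NumPy (shapes and constants are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))   # stand-in for a normalized grayscale image
n_steps = 100                  # length of each pixel's spike train

# Rate coding: brighter pixels spike more often (Poisson-like). Simple, but
# you need many timesteps and exact intensities get noisy in short windows.
rate_spikes = rng.random((n_steps, 28, 28)) < image            # (T, H, W) booleans

# Latency coding: each pixel spikes once, brighter pixels spike earlier.
# Compact, but sensitive to timing noise.
first_spike_time = np.round((1.0 - image) * (n_steps - 1)).astype(int)   # (H, W)

print(rate_spikes.mean(), first_spike_time.min(), first_spike_time.max())
```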
visarga t1_j36ccg4 wrote
> Innatera claims 10000x lower power usage with their chip.
Unfortunately it's just a toy. Not gonna run GPT-3 at the edge.
Googled for you: Innatera's third-generation AI chip has 256 neurons and 65,000 synapses and runs inference at under 1 milliwatt. That doesn't sound like much next to the human brain, which has about 86 billion neurons and operates at around 20 watts.
currentscurrents OP t1_j39hde8 wrote
Not bad for a milliwatt of power though - an Arduino idles at about 15 milliwatts.
I could see running pattern recognition in a battery-powered sensor or something.
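Back-of-the-envelope, assuming a CR2032 coin cell (~225 mAh at 3 V) and ignoring everything else in the sensor:

```python
# Rough runtime estimate for a ~1 mW inference load.
# Assumption: CR2032 coin cell, ~225 mAh at 3 V, nothing else drawing power.
capacity_wh = 0.225 * 3.0        # ~0.68 Wh
load_w = 1e-3                    # 1 mW, running continuously
hours = capacity_wh / load_w     # ~675 hours
print(f"{hours:.0f} h = about {hours / 24:.0f} days")
```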
IntelArtiGen t1_j358s2v wrote
>I meant just in terms of compute efficiency, using the same kind of algorithms we use now.
For SNNs, I'm sure they can be made more efficient, but that doesn't mean they'll have a better score-to-power-consumption ratio on a task than more standard models in their most optimized versions.
>This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.
I understand and I can't disagree, but as I said, we don't have proof that the way we usually do it (with dense layers / tensors) is necessarily less efficient than artificial SNNs or biological NNs, where "efficient" means accuracy per unit of power consumption. And we don't have a theory that would allow a generic comparison between usual ANNs, SNNs, and biological NNs; it would require a generic metric of how "intelligent" these models can be purely as a result of their design, which we don't have. Neurons in usual ANNs don't represent the same thing.
Also, an optimized model on a modern GPU can run ResNet-50 (fp16) at ~2000 fps with 450 W. We can't directly compare fps with human vision, but if the brain runs on 20 W, that's equivalent to approximately 90 fps at 20 W (and if you say 7 W of that is for vision, it's ~30 fps). Of course we don't see at 30 fps, and it's hard to compare the accuracy of ResNet-50 with humans, but ResNet-50 is also very far from being the most efficient architecture, and there are more power-efficient GPUs. It's hard to say for sure that current GPUs with SOTA models would be less power efficient than the human brain on some tasks.
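The arithmetic, taking the 2000 fps / 450 W figure at face value and just scaling linearly with power (obviously a simplification):

```python
# Scale the GPU's throughput down to the brain's power budget, assuming
# throughput scales linearly with power (a big simplification).
gpu_fps, gpu_watts = 2000, 450        # ResNet-50 fp16 figures quoted above
fps_per_watt = gpu_fps / gpu_watts    # ~4.4 fps/W

print(fps_per_watt * 20)   # whole brain at ~20 W   -> ~89 fps
print(fps_per_watt * 7)    # "vision share" at ~7 W -> ~31 fps
```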
>I feel like a lot of SNN research is motivated by understanding the brain rather than being the best possible AI.
It depends on what you call the "best possible AI". SNNs are probably not designed to be SOTA on the usual tasks, but the best way to prove that you understand the human brain is to reproduce how it works, and doing that would make the resulting model better than current models on a lot of tasks.
aibler t1_j367heq wrote
Other than 'compute in memory' and being asynchronous, what would you say are the major differences between neuromorphic and traditional processors?
IntelArtiGen t1_j36nsv9 wrote
I think there are multiple kinds of "neuromorphic" processors and they all have different abilities. OP pointed out the power efficiency. Researchers also work on analog chips, which don't have the same constraints as traditional circuits.
But how (and whether) you can truly use some of these differences depends on the use case. It seems logical that a well-exploited neuromorphic processor would be more power efficient, but that doesn't mean you have the algorithm to exploit it better than current processors for your use case, or even that it's necessarily true. For complex tasks, we don't have a proof that says "no algorithm on a traditional processor can outperform the best algorithm we know on a neuromorphic chip at the same power efficiency".
The main difference is that neuromorphic chips are still experimental, while traditional chips have enabled 10+ years of very fast progress in AI.
aibler t1_j38jis7 wrote
Very interesting, thanks so much for the explanation. Should be interesting to see how this develops!