
currentscurrents OP t1_j34uma6 wrote

>That alone, I doubt. Even if it could theoretically reproduce how the brain works with the same power efficiency, that doesn't mean you would have the algorithm to use this hardware efficiently.

I meant just in terms of compute efficiency, using the same kind of algorithms we use now. It's clear they won't magically give you AGI, but Innatera claims 10000x lower power usage with their chip.

This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.
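As a rough illustration of the sparsity point, here's a toy leaky integrate-and-fire neuron in NumPy (the leak, threshold, and input values are made up purely for illustration). The neuron only "does work" on the few timesteps where it actually crosses threshold and fires, which is where an event-driven analog implementation would save power:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron (parameters are arbitrary, for illustration only)
rng = np.random.default_rng(0)
T = 100                       # timesteps
leak = 0.9                    # membrane potential decay per step
threshold = 1.0               # spike threshold
inputs = rng.random(T) * 0.3  # random input current

v = 0.0
spikes = []
for t, i_in in enumerate(inputs):
    v = leak * v + i_in       # integrate with leak
    if v >= threshold:        # fire only when threshold is crossed
        spikes.append(t)
        v = 0.0               # reset after the spike

# Most timesteps produce no spike at all, which is the sparsity an
# event-driven (analog / neuromorphic) implementation can exploit.
print(f"{len(spikes)} spikes in {T} steps")
```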

>Usual ANNs are designed for current tasks, and current tasks are often designed for usual ANNs. It's easier to use the same datasets, but I don't think the point of SNNs is just to perform better on these datasets; it's rather to try more innovative approaches on some specific datasets.

I feel like a lot of SNN research is motivated by understanding the brain rather than by building the best possible AI. It also seems harder to get traditional forms of data into and out of the network; for example, you have to convert images into spike timings, and there are several methods for that, each with upsides and downsides.
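Two encodings I've seen are rate coding (brighter pixel → more frequent spikes) and latency coding (brighter pixel → earlier spike). A minimal sketch of both, assuming a grayscale image already normalized to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))  # stand-in for a grayscale image with values in [0, 1]
T = 50                      # number of timesteps in the spike train

# Rate coding: each pixel fires as a Bernoulli process with p = intensity,
# so bright pixels spike often and dark pixels rarely. Needs many timesteps.
rate_spikes = rng.random((T, *img.shape)) < img   # boolean array, shape (T, 28, 28)

# Latency coding: each pixel fires exactly once, earlier for brighter pixels.
# Compact, but all the information sits in a single (noise-sensitive) spike time.
first_spike_time = np.round((1.0 - img) * (T - 1)).astype(int)

print(rate_spikes.sum(), "spikes (rate code) vs", img.size, "spikes (latency code)")
```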

2

visarga t1_j36ccg4 wrote

> Innatera claims 10000x lower power usage with their chip.

Unfortunately it's just a toy. Not gonna run GPT-3 at the edge.

Googled it for you: Innatera's third-generation AI chip has 256 neurons and 65,000 synapses and runs inference at under 1 milliwatt. That doesn't sound like a lot compared to the human brain, which has around 86 billion neurons and runs on roughly 20 watts.
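Taking those figures at face value, a back-of-the-envelope per-neuron comparison makes the gap look even bigger:

```python
# Rough per-neuron power comparison, using the numbers quoted above
chip_neurons, chip_power_w = 256, 1e-3        # Innatera chip, <1 mW inference
brain_neurons, brain_power_w = 86e9, 20.0     # human brain, rough textbook figures

print(f"chip:  {chip_power_w / chip_neurons:.2e} W per neuron")   # ~3.9e-06 W
print(f"brain: {brain_power_w / brain_neurons:.2e} W per neuron") # ~2.3e-10 W
```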

4

currentscurrents OP t1_j39hde8 wrote

Not bad for a milliwatt of power though - an Arduino idles at about 15 milliwatts.

I could see running pattern recognition in a battery-powered sensor or something.
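For scale, a quick battery-life estimate, assuming a CR2032 coin cell at roughly 0.675 Wh (an approximate typical figure):

```python
# Rough battery-life math; the coin-cell capacity is an approximation
battery_wh = 0.675  # ~225 mAh at 3 V

for name, power_w in [("SNN chip (~1 mW)", 1e-3), ("Arduino idle (~15 mW)", 15e-3)]:
    hours = battery_wh / power_w
    print(f"{name}: ~{hours:.0f} h (~{hours / 24:.0f} days)")
```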

2

IntelArtiGen t1_j358s2v wrote

>I meant just in terms of compute efficiency, using the same kind of algorithms we use now.

For SNNs, I'm sure they can be made more efficient, but that doesn't mean they'll have a better score-to-power-consumption ratio on a task than more standard models in their most optimized versions.

>This makes sense to me; instead of emulating a neural network using math, you're building a physical model of one on silicon. Plus, SNNs are very sparse and an analog one would only use power when firing.

I understand and I can't disagree, but as I said, we don't have proof that the way we usually do it (with dense layers / tensors) is necessarily less efficient than artificial SNNs or biological NNs, where "efficient" means accuracy per unit of power consumption. And we don't have a theory that would allow a generic comparison between usual ANNs and SNNs or biological NNs; it would require a generic metric of how "intelligent" these models can be purely by virtue of their design, which we don't have. Neurons in usual ANNs don't represent the same thing.

Also, an optimized ResNet-50 (fp16) on a modern GPU can run at ~2000 fps at 450 W. We can't directly compare fps with human vision, but if the brain runs on 20 W, that throughput scales to roughly 90 fps at 20 W (or about 30 fps if you assume 7 W for vision). Of course we don't see at 30 fps, and it's hard to compare the accuracy of ResNet-50 with humans, but ResNet-50 is also far from the most efficient architecture and there are more power-efficient GPUs. It's hard to say for sure that current GPUs running SOTA models would be less power-efficient on some tasks than the human brain.
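Spelling out that back-of-the-envelope arithmetic with the rough figures quoted above:

```python
# Scale GPU throughput down to brain-level power budgets (rough figures)
gpu_fps, gpu_power_w = 2000, 450       # ResNet-50 fp16 on a modern GPU
fps_per_watt = gpu_fps / gpu_power_w   # ~4.4 fps per watt

print(f"at 20 W (whole brain):   ~{fps_per_watt * 20:.0f} fps")  # ~89 fps
print(f"at  7 W (visual system): ~{fps_per_watt * 7:.0f} fps")   # ~31 fps
```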

>I feel like a lot of SNN research is motivated by understanding the brain rather than being the best possible AI.

It depends on what you call the "best possible AI". These models are probably not designed to be SOTA on usual tasks, but the best way to prove you understand the human brain is to reproduce how it works, and that would make the resulting model better than current models on a lot of tasks.

2