Submitted by mrx-ai t3_zjud5l in MachineLearning
AsIAm t1_izx39lx wrote
Reply to comment by tysam_and_co in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
His take on hardware for neural nets is pretty forward(-forward) thinking. Neural nets started out analog (Rosenblatt's Perceptron), and only later did we start simulating them in software on digital computers. Some recent research (1, 2) suggests that a physical implementation of learnable neural nets is possible and far more efficient in analog circuits. This means we could run extremely large nets on a tiny chip, which could live in your toaster, or your skull.
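Part of why FF fits analog hardware is that each layer's update uses only quantities available at that layer (its input and its own activities), so nothing has to be backpropagated through an imprecise physical device. A minimal numpy sketch of one FF-style layer using the paper's "goodness = sum of squared activities" objective; the layer sizes, learning rate, and threshold here are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out))
        self.lr, self.theta = lr, theta

    def forward(self, x):
        # length-normalize the input so the next layer can't read goodness off its norm
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, x @ self.W), x

    def local_update(self, x, positive):
        h, x_norm = self.forward(x)
        goodness = (h ** 2).sum(axis=1)            # layer-local "goodness"
        sign = 1.0 if positive else -1.0
        # push goodness above theta for positive data, below theta for negative data
        p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.theta)))
        grad_g = sign * (1.0 - p)                  # d log p / d goodness
        # the weight update needs only the layer's own input and activities
        self.W += self.lr * x_norm.T @ (grad_g[:, None] * 2.0 * h)
        return h

# usage: real data as positive pass, corrupted data as negative pass; no backward pass
layer = FFLayer(16, 32)
layer.local_update(rng.standard_normal((8, 16)), positive=True)
layer.local_update(rng.standard_normal((8, 16)), positive=False)
```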
IshKebab t1_izys7ni wrote
The trouble with analogue is that it's not repeatable. Have fun debugging your code when it changes every time you run it.
I mean, I'm sure it's possible... It definitely doesn't sound pleasant though.
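To make the non-repeatability concrete, here's a tiny sketch that models an "analog" matrix multiply as the ideal result plus fresh per-call device noise; the 1% noise level is an arbitrary assumption, not a measured figure:

```python
import numpy as np

rng = np.random.default_rng()

def analog_matmul(x, W, noise=0.01):
    # ideal digital result, perturbed by noise that is different on every call
    ideal = x @ W
    return ideal * (1 + noise * rng.standard_normal(ideal.shape))

W = rng.standard_normal((4, 3))
x = rng.standard_normal((1, 4))

print(analog_matmul(x, W))  # run twice with identical inputs...
print(analog_matmul(x, W))  # ...and get slightly different numbers each time
```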
modeless t1_izzpcbe wrote
He calls it "mortal computation". Instead of loading identical pretrained weights into every robot brain, you train each brain individually, and when a brain dies its experience is lost. Just like humans! (Except you can probably train them in simulation, "The Matrix"-style.) But the advantage is that by relaxing the repeatability requirement you get hardware that is orders of magnitude cheaper and more efficient, so for any given budget it is much, much more capable. Maybe. I tend to think that won't be the case, but who knows.
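One way to make the "mortal" part concrete: if every device has its own fixed analog imperfections, the learned weights only make sense together with that particular piece of hardware, so they can't simply be copied into the next robot. A toy numpy sketch under made-up assumptions (a 20% per-weight gain mismatch standing in for analog variation, and a tiny regression task):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_device(shape, mismatch=0.2):
    # each device gets its own fixed gain error on every weight (manufacturing variation)
    return 1.0 + mismatch * rng.standard_normal(shape)

def forward(x, W, gain):
    return x @ (W * gain)

def mse(W, gain, X, Y):
    return float(((forward(X, W, gain) - Y) ** 2).mean())

def train_on_device(gain, X, Y, steps=2000, lr=0.05):
    W = np.zeros_like(gain)
    for _ in range(steps):
        err = forward(X, W, gain) - Y
        W -= lr * (X.T @ err / len(X)) * gain  # chain rule through this device's fixed gains
    return W

# a small shared regression task; both devices try to learn the same mapping
X = rng.standard_normal((256, 8))
Y = X @ rng.standard_normal((8, 4))

dev_a, dev_b = make_device((8, 4)), make_device((8, 4))
W_a = train_on_device(dev_a, X, Y)

print("trained on A, run on A:     ", mse(W_a, dev_a, X, Y))  # near zero
print("same weights copied to B:   ", mse(W_a, dev_b, X, Y))  # much worse: experience doesn't transfer
print("retrained from scratch on B:", mse(train_on_device(dev_b, X, Y), dev_b, X, Y))  # near zero again
```

Retraining on device B recovers performance, which is also the tolerance argument below: learning on the device absorbs manufacturing errors that a digital chip would have to engineer away.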
ChuckSeven t1_j016rtg wrote
Why exactly is hardware cheaper and more efficient?
modeless t1_j02fiss wrote
Without the requirement for exact repeatability, you can use analog circuits instead of digital, and your manufacturing tolerances are greatly relaxed. You can use error-prone methods like self-assembly instead of EUV photolithography in ten-billion-dollar cleanrooms.
Again, I don't really buy it but there's an argument to be made.