Comments
Akimbo333 t1_irkw2s6 wrote
MachineDrugs t1_irlgt13 wrote
So Bill was right
Tobislu t1_irllk8q wrote
The catch is that technology is just as easily used for evil as good.
Deep learning no longer needs to go through well-known channels, because it can be run locally on cheap hardware. These inferences can be used by anyone with a flip-phone or a hacked microwave.
Professional-Song216 t1_irlnrv4 wrote
Bill who?
fumblesmcdrum t1_irlp1zy wrote
Bill stickers
Chomperzzz t1_irlplpo wrote
Bill stickers is innocent!
HofvarpnirStudios t1_irlvexn wrote
Power asymmetry can lead to nefarious use as well
As in, only those with massive GPUs can corner the market, or something like that
genshiryoku t1_irmxqml wrote
A consensus seems to be forming that we don't need any better technology than we already have right now.
If for some reason hardware development stopped today and nothing new were ever made, it's possible that with the right architectural/software breakthroughs we could still reach AGI.
Yeah, Moore's law is most likely going to end around the end of this decade, but we have more than enough processing power for the AGI revolution to still happen.
_____DEAD_____ t1_irpamzq wrote
Gonna be interesting to see general appliances gain the ability to "learn". Give them a camera so they can see the world too; just don't forget to clean the microwave before it microwaves you.
_____DEAD_____ t1_irqlbpd wrote
I agree that it's more of a programming problem, but improving technology brings increased speed and available memory, which absolutely benefits the development of these programs.
Quealdlor t1_irrr1tx wrote
But do you know how hard it is to run modern games on Ultra settings at 4K 60 fps, let alone higher framerates? We need better hardware or some software breakthroughs. DLSS looks horrible, btw.
GenoHuman t1_is15mcm wrote
The 4090 can easily run any modern game at 4K 60 fps; in fact, it can run most games at 4K 100+ fps with ray tracing.
Dr_Singularity OP t1_irkrmtd wrote
The memory requirement is greatly diminished because of their system-algorithm co-design approach. The proposed methods reduce memory use by more than a factor of 1,000 compared to cloud training frameworks, and by a factor of 100 compared to the best edge training framework they could find (MNN).
The framework saves energy and encourages practical use by decreasing the per-iteration time by more than 20× compared to a dense update with a vanilla system design. Their findings show that small IoT devices can not only run inference but also learn from experience and acquire new skills over time.
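For intuition only, here is a minimal PyTorch sketch of the general "sparse update" idea the paper builds on (it is not their actual engine, which runs quantized models on bare-metal MCUs; the model sizes and the choice of which layers to train are made up for illustration):

```python
# Minimal sketch: train only a small subset of parameters so that gradients and
# optimizer state stay tiny compared to a dense update of the whole network.
import torch
import torch.nn as nn

# A small stand-in model; a real on-device deployment would use a quantized CNN.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),          # only this "head" will be updated
)

# Dense update: every parameter would need a gradient plus optimizer state.
dense_params = sum(p.numel() for p in model.parameters())

# Sparse update: freeze everything except the last layer.
for p in model.parameters():
    p.requires_grad_(False)
for p in model[-1].parameters():
    p.requires_grad_(True)

trainable = [p for p in model.parameters() if p.requires_grad]
sparse_params = sum(p.numel() for p in trainable)
print(f"dense update: {dense_params} params, sparse update: {sparse_params} params")

# The optimizer only tracks the trainable subset, so its memory footprint shrinks
# proportionally; frozen layers also need no stored gradients.
optimizer = torch.optim.SGD(trainable, lr=0.01)
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

The point of the sketch is just the ratio: updating a small, carefully chosen slice of the network keeps training memory within reach of a microcontroller, which is what the system-algorithm co-design exploits.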