Baturinsky t1_jdvwtlg wrote

Actually, the Matrix could be a case of a relatively GOOD AI. Yes, it disempowers humanity, but it keeps us safe and gives us entertainment and an illusion of purpose.

Also, it looks like the singularity hasn't happened in any of the good/optimistic scenarios.


Baturinsky t1_jb6v5pm wrote

I haven't noticed any improvement in memory requirements for Stable Diffusion in five months... My RTX 2060 is still enough for 1024x640, but not more.

LLaMA does well on benchmarks at small model sizes, but the small size could make it less of a fit for RLHF.

There is also miniaturisation for inference by reducing precision to int8 or even int4. But that doesn't work for training, and I believe AGI requires real-time training.
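To make the precision-reduction point concrete, here is a minimal sketch of symmetric int8 weight quantization (hypothetical, not any particular library's API): weights are scaled so the largest magnitude maps to 127, stored as integers, and rescaled at inference time.

```python
# Hypothetical sketch of symmetric int8 weight quantization for inference.
def quantize_int8(weights):
    # Scale maps the largest-magnitude weight onto the int8 range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original floats.
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Note the rounding error on small weights like 0.003 (it collapses to 0): fine for a forward pass, but this is exactly why low-precision integer formats are a poor fit for accumulating tiny gradient updates during training.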

So, in theory, AGI could be achieved even without big "a-ha"s. Take existing training methods, train on many different domains and data architectures, add tree search from AlphaGo and real-time training - and we will probably be close. But it would require pretty big hardware, and it would be "only" superhuman in some specific domains.
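The "tree search guided by a learned evaluator" idea can be sketched very roughly. This is a toy depth-limited search, not AlphaGo's actual MCTS: the game, the `value` function (a stand-in for a learned value network), and all the names are made up for illustration.

```python
# Toy sketch: depth-limited tree search over a trivial game, guided by a
# hand-written value function standing in for a learned value network.

def legal_moves(state):
    # Toy game: state is a running total; each move adds 1 or 2, capped at 10.
    return [m for m in (1, 2) if state + m <= 10]

def value(state):
    # Stand-in for a learned evaluator: prefer states closer to 10.
    return -abs(10 - state)

def search(state, depth):
    # Best achievable evaluation from `state` within `depth` plies.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value(state)
    return max(search(state + m, depth - 1) for m in moves)

def best_move(state, depth=3):
    # Pick the move whose subtree looks best to the evaluator.
    return max(legal_moves(state), key=lambda m: search(state + m, depth - 1))
```

The AlphaGo-style point is the division of labour: the learned network supplies cheap position estimates, and the search spends compute only where those estimates say it matters.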


Baturinsky t1_j9xstbp wrote

The problem with that approach is that 1. we don't know how to do that reliably, and 2. by the time AGI is invented, it will likely be able to run on a home computer or a network of them, and there will be someone evil or reckless enough to run it without the guardrails.