
Veei t1_iylnlcg wrote

I’m definitely with you on that first half you describe. I have very little faith in humanity, especially in those who have the resources to successfully create AGI and beyond. I doubt their innovations would be made accessible to those without much means (if government agencies didn’t seize them first). I’ve heard some discussions around the thought that we should (or predict we will) stick to ANI (the single/narrow-purpose AI we have now). Then there are many experts who say it’s highly unlikely an AI would choose to help humans. Our track record is not good. Humans are so utterly prone to corruption when given the opportunity. It’s in our nature to define an “other” and divide ourselves. We’re selfish, tribal, conflict-driven pricks.

My guess is most want to (or must) believe in the coming of ASI and LEV simply due to the overwhelming fear of death. I’m in that camp, though my cynicism keeps my hope from being anything other than a child’s wishing on a star.


humanefly t1_iyo5v4x wrote

I would expect AGI functionality to trickle down and become something of a widely available commodity in time. The back end could live in the cloud, with an interface available via cell phone.

I think we can have some control over some AIs and design them to be human-friendly. A human-friendly AI could help protect humans from human-unfriendly AI. Caution recommended.
