Aquaritek t1_j26fk3t wrote

We're at the dawn of the era described in one of my favorite quotes:

"Thus the first ultraintelligent machine is the last invention that man need ever make" - Irving John Good

For context, Good was speculating that if we build a machine whose intelligence at minimum matches, and for better or worse exceeds, our own, then from that moment on we will essentially never need to invent anything ever again.

See, a machine with higher-order intelligence than ours would be what we defer to for figuring out or inventing anything beyond that point, because any attempt of ours to figure things out ourselves would be a moot endeavor. Honestly, this "machine" would only need to reference its own output to extend its advancements indefinitely.

This would all result in a massive explosion of "intelligence" itself, and from then on humans would be left with only the emotional experience of living life.

In my opinion, GPT-5 or 6 will be this intelligence (2.5 to 5 years out). It will not be widely considered sentient and it will not exude sapience. However, it will in fact be more intelligent than any of us, or any number of us combined.

If it is given the ability to regulate its own training against real-time information (especially self-generated information) and to modify its own code (much like our brains can), we will be left in the dust in microseconds.

I'm picturing a situation similar to "Transcendence" unfolding over time with all of that.

What kind of world does this result in, though? That, I believe, depends on the side it inexorably chooses to take. Does it need us in any way? Does it want to help us? Is it moral? Does it care? Does it have any ambition, or any requirement, to care?

You can philosophize on this infinitely.
